[llvm-dev] llvm (the middle-end) is getting slower, December edition

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Sun Dec 18 02:17:10 PST 2016


On Sun, Dec 18, 2016 at 12:38 AM, Sean Silva <chisophugis at gmail.com> wrote:

>
>
> On Sat, Dec 17, 2016 at 8:39 PM, Daniel Berlin <dberlin at dberlin.org>
> wrote:
>
>>
>>>>
>>> LVI is one of those analyses with quadratic runtime, but has a cutoff to
>>> its search depth so that it is technically not quadratic. So increased
>>> inlining could easily exacerbate it more than non-"quadratic" passes.
>>> (increased inlining would also cause a general slowdown).
>>>
>>>
>> LVI is only quadratic because of the way we've built it (it's actually
>> worse than quadratic, but let's just normalize to "quadratic").
>> Non-lazy versions are not quadratic, and you could likely build an
>> incremental lazy version that is also not quadratic in practice.
>> At some point, none of this will change unless people hold the line
>> somewhere. Compilers usually get slower 0.1% at a time, not in huge leaps
>> and bounds.
>> Without people saying "If we really want to get this case, in a thing not
>> really designed to get that case sanely, we need to stop and think about
>> it", it doesn't get thought about.
>>
>> Obviously, you can go too far into that extreme, but I think we are still
>> too far on one side of that one (at least, in most places in LLVM).
>>
>
> Good point. One thing that struck me when randomly browsing the GCC source
> code is that GCC seems to use far more sophisticated algorithms (e.g. lots
> of files with comments at the top citing different papers and things; not
> that counting citations means that much, but it's a rough proxy for keeping
> up with the literature). With LLVM, it seems like most of the stuff is
> homegrown or at least not frequently revisited with an eye to using a new
> algorithm. And once something starts picking up a bunch of special cases
> and things, it becomes harder and harder to replace with a new better
> thing, because nobody will want to throw the old thing away until the new
> thing covers the same set of test cases.
>

This is precisely what happened with GVN, for example (and happens with
other things, like InstCombine, like LVI, like BasicAA, ...).   All of
these things started out serving a purpose, with some useful design
lifetime, and they made sense for those purposes.  People push them past
their useful design lifetime and original purposes, but they don't get
redesigned or rethought.  Pretty soon, you have a mess that is very hard to
clean up for the reason you stated (in practice, you may have to split
analyses or passes, and accept that you can't and shouldn't try to catch
absolutely everything in every single pass).
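
(To make the LVI point above concrete: here is a toy sketch of why the
lazy formulation behaves quadratically while a non-lazy solver doesn't.
None of this is LLVM's actual code; Range, Block, queryLazy, and solveAll
are all invented for illustration.)

// Toy sketch, not LLVM's actual LVI: contrast a lazy per-query backward
// walk (with the depth cutoff) against a non-lazy forward worklist solver.
#include <algorithm>
#include <cstdio>
#include <deque>
#include <vector>

struct Range {                 // three-level toy lattice: bottom < [lo,hi] < top
  enum { Bottom, Interval, Top } kind = Bottom;
  long lo = 0, hi = 0;
  static Range top() { return {Top}; }
  static Range interval(long l, long h) { return {Interval, l, h}; }
  Range join(Range o) const {  // least upper bound (interval hull)
    if (kind == Bottom) return o;
    if (o.kind == Bottom) return *this;
    if (kind == Top || o.kind == Top) return top();
    return interval(std::min(lo, o.lo), std::max(hi, o.hi));
  }
  bool same(Range o) const {
    return kind == o.kind && (kind != Interval || (lo == o.lo && hi == o.hi));
  }
};

struct Block { std::vector<int> preds; Range fact; };  // fact: range this block asserts

// Lazy query: re-walks predecessors on every query.  Each query can touch
// O(blocks) nodes, so many queries cost O(blocks^2); the depth cutoff is
// what makes it "technically not quadratic" -- it just gives up instead.
Range queryLazy(const std::vector<Block> &cfg, int b, int depth) {
  if (cfg[b].fact.kind != Range::Bottom) return cfg[b].fact;
  if (depth == 0 || cfg[b].preds.empty()) return Range::top();
  Range r;                                             // starts at bottom
  for (int p : cfg[b].preds) r = r.join(queryLazy(cfg, p, depth - 1));
  return r;
}

// Non-lazy worklist: one forward pass computes every block's range.  An
// edge is revisited only when its source changes, so total work is bounded
// by edges * lattice height, no matter how many queries follow.
std::vector<Range> solveAll(const std::vector<Block> &cfg) {
  int n = (int)cfg.size();
  std::vector<std::vector<int>> succs(n);
  for (int b = 0; b < n; ++b)
    for (int p : cfg[b].preds) succs[p].push_back(b);
  std::vector<Range> out(n);
  std::deque<int> work;
  for (int b = 0; b < n; ++b) work.push_back(b);
  while (!work.empty()) {
    int b = work.front(); work.pop_front();
    Range r;
    if (cfg[b].fact.kind != Range::Bottom) r = cfg[b].fact;
    else if (cfg[b].preds.empty()) r = Range::top();
    else for (int p : cfg[b].preds) r = r.join(out[p]);
    if (!r.same(out[b])) {
      out[b] = r;
      for (int s : succs[b]) work.push_back(s);        // requeue only on change
    }
  }
  return out;
}

int main() {
  std::vector<Block> cfg(1000);                        // straight-line chain
  for (int b = 1; b < 1000; ++b) cfg[b].preds.push_back(b - 1);
  cfg[0].fact = Range::interval(0, 7);                 // entry asserts x in [0,7]
  Range lazy = queryLazy(cfg, 999, 2000);              // one query walks 1000 blocks
  Range solved = solveAll(cfg)[999];                   // one pass answers all blocks
  std::printf("lazy [%ld,%ld]  solved [%ld,%ld]\n",
              lazy.lo, lazy.hi, solved.lo, solved.hi);
}

The lazy version pays queries * walk length; the worklist version pays
edges * lattice height once, up front, for every block.  An incremental
lazy version would cache per-block results across queries and invalidate
them on change, which is the harder engineering problem.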

Thankfully, a number of companies/people in our community can get the time
to spend on "technical debt" instead of just improving performance, but
IMHO the balance is still tilted a little too far toward the
performance-at-any-cost side.
It happens with all compilers, of course; it's not specific to LLVM.

As for why GCC tends to use more literature, again IMHO:
GCC has more areas than LLVM with specific code owners, which has the
upside that someone is ostensibly thinking about these things and has some
idea of where that area should go.   They can provide explicit guidance, or
at least make sure something is going in the right direction.   The
downside, of course, is that it comes with a bit of bureaucracy (though,
IMHO, LLVM has this in practice, it's just hidden more), and sometimes a
little less innovation (again, this varies).   It also means people have to
not view ownership as an honorific, and when they move on, change focus, or
stop caring, the area gets reassigned to someone else.  The other downside
is that it often involves saying "no, let's not do that, let's do this
instead" a lot, and that makes people feel bad when they know someone else
is just trying to accomplish a task they were assigned.

The only thing any of these approaches changes, in the end, is the useful
lifetime of the end result.  It doesn't mean you end up with a thing that
is perfect, but you may get a thing that works well for 10 years instead of
5.  Whether that is the right tradeoff for what you are doing also varies
wildly (it probably makes no sense to spend 6 months designing a thing that
is only meant to last 6 months).

Also, this isn't, of course, to say that homegrown is necessarily bad.
But doing it well at least requires having read and understood enough
theory to know how to grow your own thing well, or the thing has to be
small enough that it just doesn't matter :)
Personally, I learned compilers and optimization in an environment
surrounded by people who had pretty much done everything we are trying to
do before, and so I always learned to go hunting to see what the state of
the art was, and the theories behind it, before I tried to roll my own.
That didn't always make the state of the art good, sane, or even
implementable, mind you, but it meant the problem at least got thought
about, and often, even when the papers were crazy, they had ideas or
approaches you could reuse.

--Dan