[llvm-dev] Disable optimization on basic block level
Davide Italiano via llvm-dev
llvm-dev at lists.llvm.org
Mon Apr 24 21:00:10 PDT 2017
On Mon, Apr 24, 2017 at 8:28 PM, Daniel Berlin via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> On Mon, Apr 24, 2017 at 7:59 PM, Matthias Braun via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>> > On Apr 24, 2017, at 5:30 PM, Joerg Sonnenberger via llvm-dev
>> > <llvm-dev at lists.llvm.org> wrote:
>> > On Mon, Apr 24, 2017 at 11:06:36AM -0700, Matthias Braun via llvm-dev
>> > wrote:
>> >> Would be cool to create a suite of extreme inputs, maybe a special llvm
>> >> test-suite module. This module would contain scripts that produce
>> >> extreme inputs (long basic blocks, deeply nested loops,
>> >> utils/create_ladder_graph.py, etc.) In fact I have a python script here
>> >> as well that generates a few variations of stuff that was interesting
>> >> to scheduling algos. It would just take someone to set up a proper
>> >> test-suite and a bot for it, and I'd happily contribute more tests :)
>> > Well, I find limited motivation to "optimise" the compiler for artificial
>> > code. Trying to identify real-world code that triggers expensive
>> > behavior is a much more useful exercise, IMO.
>> Well, my practical experience so far has been that the function size for
>> most compiler inputs stays reasonably small compared to how much
>> memory most computers have. I assume the humans who write that code can
>> only handle a per-function complexity up to a certain level.
> Mostly :)
> But in practice, humans end up triggering the limits most of the time, it
> just takes longer.
> I've rarely seen an N^3 or worse algorithm in a compiler that didn't end up
> being a problem over time for real code.
> We aren't usually talking about things that are fundamentally impossible in
> faster time bounds.
> We are talking about things that we've skimped on the implementation for
> various reasons (sometimes very good!) and decided not to do faster.
> IE to me there is a big difference between, say, andersen's points-to, which
> is N^3 worst case, but linear with good implementations in practice, and
> say, "rando algorithm i came up with for DSE or something that is N^3
> because i don't feel like implementing one of the good ones"
> The former, yeah, we do what we can
> The latter, those things tend to be the problems over time.
> IE something may be better than nothing, but over time, if we don't fix it,
> that something ends up very very bad.
> If you are doing a thing, and it's a sane thing to do, and your problem is
> you just have generated too much code for the compiler, that is important
> to me.
> If you are doing a thing, it's crazy, or your code generator just flat out
> sucks, less important.
I definitely agree with both of you.
Historically we've been in a situation where we don't discover
compile-time problems until too late. As Murphy's law strikes, they
generally happen right before cutting a release, and sometimes we have
to commit less-than-ideal stopgap solutions (cutoffs, disabling passes,
etc.). Having more quality control over this would definitely be an
improvement compared to what we have right now. Joerg, you're the one
who reported some really good compile-time regressions over time (CVP and
LCSSA come to the top of my mind, as I fixed one of them and reviewed
the other), so you should be well aware of the pain of
discovering these problems too late ;)
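For what it's worth, a generator of the kind Matthias describes can be
tiny. Here's a hedged sketch (not the actual utils/create_ladder_graph.py,
just a hypothetical example with made-up names) that emits a C function
whose single basic block contains n dependent statements, the sort of
input that exposes passes superlinear in block length:

```python
# Hypothetical stress-input generator, in the spirit of the extreme-input
# test-suite module discussed above. Emits a C function whose one basic
# block has `n` sequential, dependent additions.
def long_block(n):
    lines = ["int long_block(int x) {"]
    for i in range(n):
        # Each statement depends on the previous one, so the block
        # cannot be trivially split or vectorized away.
        lines.append("  x = x + %d;" % i)
    lines.append("  return x;")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Crank n up (10k, 100k, ...) and time `clang -O2` on the output
    # to watch for superlinear growth.
    print(long_block(10000))
```

One could then plot compile time against n; anything clearly worse than
roughly linear growth is a candidate for the kind of bot report Matthias
suggested.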
"There are no solved problems; there are only problems that are more
or less solved" -- Henri Poincare