<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 24, 2017 at 7:59 PM, Matthias Braun via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> > On Apr 24, 2017, at 5:30 PM, Joerg Sonnenberger via llvm-dev <llvm-dev@lists.llvm.org> wrote:
> >
> > On Mon, Apr 24, 2017 at 11:06:36AM -0700, Matthias Braun via llvm-dev wrote:
> >> Would be cool to create a suite of extreme inputs, maybe a special llvm
> >> test-suite module. This module would contain scripts that produce
> >> extreme inputs (long basic blocks, deeply nested loops,
> >> utils/create_ladder_graph.py, etc.). In fact I have a Python script here
> >> as well that generates a few variations of stuff that was interesting
> >> to scheduling algos. It would just take someone to set up a proper
> >> test-suite and a bot for it, and I'd happily contribute more tests :)
> >
> > Well, I find limited motivation to "optimise" the compiler for artificial
> > code. Trying to identify real-world code that triggers expensive
> > behavior is a much more useful exercise, IMO.
>
> Well, my practical experience so far has been that the function size of most
> compiler inputs stays reasonably small compared to how much memory most
> computers have. I assume the humans who write that code can only handle
> per-function complexity up to a certain level.

Mostly :)
But in practice, humans end up triggering the limits most of the time; it just takes longer.

I've rarely seen an N^3-or-worse algorithm in a compiler that didn't end up a problem for real code over time.

We aren't usually talking about things that are fundamentally impossible in faster time bounds.
We are talking about things whose implementations we've skimped on for various reasons (sometimes very good ones!) and decided not to do faster.

That is, to me there is a big difference between, say, Andersen's points-to analysis, which is N^3 worst case but linear in practice with good implementations, and, say, "a random algorithm I came up with for DSE or something, which is N^3 because I didn't feel like implementing one of the good ones".
The former: yeah, we do what we can.
The latter: those tend to become the problems over time.

That is, something may be better than nothing, but if we don't fix it, over time that something ends up very, very bad.

If you are doing a thing, and it's a sane thing to do, and your problem is just that you have generated too much code for the compiler, that is important to me.
If you are doing a thing and it's crazy, or your code generator just flat-out sucks, it's less important.
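Coming back to Matthias's quoted "extreme inputs" idea: below is a minimal sketch of the kind of generator script he describes, in the spirit of utils/create_ladder_graph.py. The knobs, output file name, and C shape of the output are all invented for illustration; a real test-suite module would want these as proper parameters.

#!/usr/bin/env python3
# Sketch: emit a C file with deeply nested loops and one very long
# basic block, as a compile-time stress input. The output is meant to
# be compiled, not run (2^DEPTH iterations would never finish).

DEPTH = 64        # loop nesting depth (hypothetical default)
BLOCK_LEN = 5000  # statements in the straight-line block (hypothetical)

def emit(depth, block_len, out):
    out.write("volatile long sink;\n")
    out.write("int main(void) {\n  long x = 1;\n")
    # Open `depth` nested loops.
    for i in range(depth):
        indent = "  " * (i + 2)
        out.write("%sfor (int i%d = 0; i%d < 2; ++i%d) {\n" % (indent, i, i, i))
    # One long basic block: a chain of dependent updates.
    body = "  " * (depth + 2)
    for j in range(block_len):
        out.write("%sx = x * 3 + %d;\n" % (body, j))
    # Close the loops again.
    for i in range(depth - 1, -1, -1):
        out.write("%s}\n" % ("  " * (i + 2)))
    out.write("  sink = x;\n  return 0;\n}\n")

if __name__ == "__main__":
    with open("extreme_input.c", "w") as f:
        emit(DEPTH, BLOCK_LEN, f)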
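And to put the "rando N^3 algorithm vs. a good one" point in code: a toy contrast, not anything from LLVM. The straight-line "program" of loads and stores is made up (no aliasing, no control flow), and both functions compute the same set of dead stores; only the cost differs.

# A toy program is a list of (op, address) pairs, op in {"store", "load"}.
# A store is dead if a later store to the same address happens before
# any load of that address.

def dead_stores_quadratic(ops):
    """The 'rando' version: for every store, rescan everything after it."""
    dead = set()
    for i, (op, addr) in enumerate(ops):
        if op != "store":
            continue
        for later_op, later_addr in ops[i + 1:]:
            if later_addr != addr:
                continue
            if later_op == "load":      # value is observed: store is live
                break
            if later_op == "store":     # overwritten before any load: dead
                dead.add(i)
                break
    return dead

def dead_stores_linear(ops):
    """One backward pass with a set of addresses known to be overwritten."""
    dead = set()
    overwritten = set()
    for i in range(len(ops) - 1, -1, -1):
        op, addr = ops[i]
        if op == "load":
            overwritten.discard(addr)   # earlier stores to addr are observed
        elif op == "store":
            if addr in overwritten:
                dead.add(i)             # a later store wins before any load
            overwritten.add(addr)
    return dead

if __name__ == "__main__":
    prog = [("store", "a"), ("store", "b"), ("load", "b"),
            ("store", "b"), ("store", "a")]
    assert dead_stores_quadratic(prog) == dead_stores_linear(prog) == {0}

The quadratic version is fine on hand-written functions and falls over exactly on the machine-generated inputs discussed above; the linear version is barely more work to write, which is the point.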