<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 13, 2016 at 4:23 PM, Joerg Sonnenberger via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, Jan 13, 2016 at 03:38:24PM -0800, Philip Reames wrote:<br>
> I don't think that arbitrarily limiting the complexity of the search is the<br>
> right approach. There are numerous ways the LVI infrastructure could be<br>
> made more memory efficient. Fixing the existing code to be memory efficient<br>
> is the right approach. Only once there's no more low hanging fruit should<br>
> we even consider clamping the search.<br>
<br>
</span>Memory efficiency is only half of the problem. For example, groonga's expr.c<br>
needs 4 minutes to build on my laptop (a 2.7GHz i7). That doesn't sound<br>
reasonable for -O2. Unlike the GVN issue, the cases I have run into do<br>
finish after an (unreasonably long) while, so at least it is not trivially<br>
superlinear.<br></blockquote><div><br></div><div><br></div><div>Okay, so rather than artificially limiting things, we should see if we can fix the efficiency of the algorithms.</div><div><br></div><div>CVP is an O(N * lattice height) pass. It sounds like it is behaving worse than that for you.</div><div><br></div><div><br></div><div><br></div></div></div></div>