<div class="moz-cite-prefix">On 01/13/2016 04:28 PM, Daniel Berlin
via llvm-dev wrote:<br>
</div>
<blockquote
cite="mid:CAF4BwTVT-vo+eUwisTyLDQhFMfU4aSOXSiTZV8kwnyMykRi+tA@mail.gmail.com"
type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Jan 13, 2016 at 4:23 PM,
Joerg Sonnenberger via llvm-dev <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:llvm-dev@lists.llvm.org" target="_blank"><a class="moz-txt-link-abbreviated" href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a></a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex"><span
class="">On Wed, Jan 13, 2016 at 03:38:24PM -0800,
Philip Reames wrote:<br>
> I don't think that arbitrary limiting the
complexity of the search is the<br>
> right approach. There are numerous ways the LVI
infrastructure could be<br>
> made more memory efficient. Fixing the existing
code to be memory efficient<br>
> is the right approach. Only once there's no more
low hanging fruit should<br>
> we even consider clamping the search.<br>
<br>
</span>Memory efficiency is only half of the problem. I.e.
groonga's expr.c<br>
needs 4m to build on my laptop, a 2.7GHz i7. That doesn't
sound<br>
reasonable for a -O2. Unlike the GVN issue, the cases I
have run into do<br>
finish after a(n unreasonable) while, so at least it is
not trivially<br>
superlinear.<br>
</blockquote>
<div><br>
</div>
<div><br>
</div>
<div>Okay, so rather than artificially limit stuff, we
should see if we can fix the efficiency of the algorithms.</div>
<div><br>
</div>
<div>CVP is an O(N*lattice height) pass problem. It sounds
like it is being more than that for you.</div>
</div>
</div>
</div>
</blockquote>

(Deliberately replying up thread to skip the detailed discussion of the
lattice.)

We have had bugs in the past which caused us to move back up the
lattice. The most likely cause of a long running time(*) is an
accidental reversal in the lattice. Note that we do move up the lattice
intentionally in some cases (specifically around assumes) when
intersecting facts from different sources.
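
For illustration, here is a minimal, hypothetical sketch (the names are
mine, not the actual LVI code) of what the monotone direction looks like
for ranges, with an assertion that would flag an accidental reversal. In
this simplified picture, overdefined (the full range) is the bottom of
the lattice and legal updates only widen:

#include <algorithm>
#include <cassert>
#include <cstdint>

// Stand-in for llvm::ConstantRange: a half-open interval [Lo, Hi).
struct Range {
  int64_t Lo, Hi;
  bool contains(const Range &O) const { return Lo <= O.Lo && O.Hi <= Hi; }
};

// A monotone merge must move *down* the lattice: the result has to
// contain both inputs, so ranges only ever grow. An update path that
// replaces a range with a narrower one (outside the deliberate
// assume-intersection case) is exactly the kind of reversal that lets
// the solver revisit the same work over and over.
Range mergeDown(const Range &Old, const Range &New) {
  Range R{std::min(Old.Lo, New.Lo), std::max(Old.Hi, New.Hi)};
  assert(R.contains(Old) && R.contains(New) && "lattice reversal");
  return R;
}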

(*) Assuming we're talking about an algorithmic issue and not simply
poor implementation quality. We're definitely more wasteful with memory
than we need to be, for instance. Depending on the caching behavior of
the algorithm, this could appear superlinear in running time.

If you have found a case that involves many steps per variable in the
lattice, one principled fix would be to bound the number of
constant-range refinements allowed: e.g., if a value's constant range
has been updated 5 times, drop it to overdefined.
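
A minimal sketch of what such a clamp could look like (the state names
and threshold are illustrative assumptions, not the actual LVI API):

#include <cstdint>

enum class State { Undefined, ConstantRange, Overdefined };

struct ValueState {
  State Kind = State::Undefined;
  int64_t Lo = 0, Hi = 0;    // valid only when Kind == ConstantRange
  unsigned Refinements = 0;  // how many times the range has changed
};

// Illustrative threshold: the "5 updates" from the text above.
constexpr unsigned MaxRefinements = 5;

// Apply a refined range, but give up and drop to overdefined once a
// value has been refined too many times. This bounds the number of
// lattice steps per value by a constant, restoring the
// O(N * lattice height) behavior even in the presence of repeated
// range updates.
void refine(ValueState &VS, int64_t Lo, int64_t Hi) {
  if (VS.Kind == State::Overdefined)
    return;                       // already at the bottom of the lattice
  if (VS.Kind == State::ConstantRange && VS.Lo == Lo && VS.Hi == Hi)
    return;                       // no change, so no refinement charged
  if (++VS.Refinements > MaxRefinements) {
    VS.Kind = State::Overdefined; // clamp: too many refinements
    return;
  }
  VS.Kind = State::ConstantRange;
  VS.Lo = Lo;
  VS.Hi = Hi;
}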

Philip