[llvm-dev] High memory use and LVI/Correlated Value Propagation
Joerg Sonnenberger via llvm-dev
llvm-dev at lists.llvm.org
Thu Jan 14 05:57:58 PST 2016
On Wed, Jan 13, 2016 at 04:28:03PM -0800, Daniel Berlin wrote:
> On Wed, Jan 13, 2016 at 4:23 PM, Joerg Sonnenberger via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
> > On Wed, Jan 13, 2016 at 03:38:24PM -0800, Philip Reames wrote:
> > > I don't think that arbitrarily limiting the complexity of the search is
> > > the right approach. There are numerous ways the LVI infrastructure could
> > > be made more memory efficient. Fixing the existing code to be memory
> > > efficient is the right approach. Only once there's no more low hanging
> > > fruit should we even consider clamping the search.
> >
> > Memory efficiency is only half of the problem. I.e. groonga's expr.c
> > needs 4m to build on my laptop, a 2.7GHz i7. That doesn't sound
> > reasonable for a -O2. Unlike the GVN issue, the cases I have run into do
> > finish after a(n unreasonable) while, so at least it is not trivially
> > superlinear.
> >
>
>
> Okay, so rather than artificially limit stuff, we should see if we can fix
> the efficiency of the algorithms.
>
> CVP is an O(N * lattice height) pass. It sounds like it is costing more
> than that for you.
I assume you mean something like #BB * #variables? The instances I have
seen are all very large functions with many branches. Consider it from
this perspective: there is currently only one hammer for controlling the
amount of memory/CPU time CVP will use from clang -- -O0 vs -O2. I
believe that -O2 should provide a result in a reasonable amount of time
and with a reasonable amount of memory. The 2GB memory clamp and the 1h of
CPU time for a 64-bit clang are similar to the limits for a native build on
a 32-bit architecture, so they are not arbitrary constraints.
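
To make the #BB * #variables point concrete, here is a minimal sketch, not
LLVM's actual LazyValueInfo code, of the kind of per-block, per-value cache
an LVI-style analysis keeps. All the names below are made up for
illustration; the point is only that one memoized lattice entry per
(basic block, value) pair queried means worst-case memory proportional to
#BB * #variables in a huge function with many branches:

    // Hypothetical sketch of an LVI-style per-block, per-value cache.
    #include <cstdint>
    #include <map>
    #include <utility>

    struct BasicBlock;   // stand-ins for the real IR types
    struct Value;

    // A toy lattice element: either "overdefined" or a known range.
    struct LatticeVal {
      bool Overdefined = false;
      int64_t Lo = 0, Hi = 0;   // [Lo, Hi] when not overdefined
    };

    class LazyValueCache {
      // One cached entry per (block, value) pair ever queried.
      std::map<std::pair<const BasicBlock *, const Value *>, LatticeVal>
          Cache;

    public:
      const LatticeVal &getValueAt(const Value *V, const BasicBlock *BB) {
        auto Key = std::make_pair(BB, V);
        auto It = Cache.find(Key);
        if (It != Cache.end())
          return It->second;
        // Solving walks predecessors and may query other (block, value)
        // pairs, populating more of the cache as it goes.
        LatticeVal Result = solve(V, BB);
        return Cache.emplace(Key, Result).first->second;
      }

    private:
      LatticeVal solve(const Value *, const BasicBlock *) {
        LatticeVal LV;
        LV.Overdefined = true;   // placeholder: give up immediately
        return LV;
      }
    };

Whatever the real implementation's layout, the question above is how much of
such a cache (and the work to fill it) -O2 should be allowed to consume.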
Joerg