[llvm-dev] High memory use and LVI/Correlated Value Propagation
Hal Finkel via llvm-dev
llvm-dev at lists.llvm.org
Thu Jan 14 10:32:31 PST 2016
----- Original Message -----
> From: "Daniel Berlin via llvm-dev" <llvm-dev at lists.llvm.org>
> To: "Joerg Sonnenberger" <joerg at britannica.bec.de>, "llvm-dev"
> <llvm-dev at lists.llvm.org>
> Sent: Thursday, January 14, 2016 10:38:10 AM
> Subject: Re: [llvm-dev] High memory use and LVI/Correlated Value
> Propagation
> On Thu, Jan 14, 2016 at 5:57 AM, Joerg Sonnenberger via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> > On Wed, Jan 13, 2016 at 04:28:03PM -0800, Daniel Berlin wrote:
> > > On Wed, Jan 13, 2016 at 4:23 PM, Joerg Sonnenberger via llvm-dev
> > > <llvm-dev at lists.llvm.org> wrote:
> > > > On Wed, Jan 13, 2016 at 03:38:24PM -0800, Philip Reames wrote:
> > > > > I don't think that arbitrarily limiting the complexity of the
> > > > > search is the right approach. There are numerous ways the LVI
> > > > > infrastructure could be made more memory efficient. Fixing the
> > > > > existing code to be memory efficient is the right approach. Only
> > > > > once there's no more low-hanging fruit should we even consider
> > > > > clamping the search.
> > > >
> > > > Memory efficiency is only half of the problem. I.e. groonga's expr.c
> > > > needs 4 minutes to build on my laptop, a 2.7GHz i7. That doesn't
> > > > sound reasonable for -O2. Unlike the GVN issue, the cases I have run
> > > > into do finish after a(n unreasonable) while, so at least it is not
> > > > trivially superlinear.
> > >
> > > Okay, so rather than artificially limit stuff, we should see if we can
> > > fix the efficiency of the algorithms.
> > >
> > > CVP is an O(N * lattice height) problem. It sounds like it is being
> > > more than that for you.
>
> > I assume you mean something like #BB * #variables?
>
> No.
> ;)
> I mean the theoretical complexity of performing the optimization CVP
> performs (regardless of how it is currently implemented).
> What CVP does is doable in linear time on number of variables *
> lattice height. It has some memory cost to do this.
> Assume for a second the lattice height is fixed (i.e., we do not have
> completely symbolic ranges).
> It is possible, in linear time, to build an SSA-like form for the
> ranges it is computing.
> It is then possible, in linear time, to propagate those ranges to all
> affected variables.
> This is O(N) + O(N).
> Doing so may change the ranges (they only get *better*, however).
> So you may need to do this until the ranges fall all the way down the
> lattice.
> This means you may repeat the propagation at most lattice-height times.
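To make sure I'm reading you correctly, here is a rough sketch of how I
picture the repeated propagation step. The types and names below are mine,
not LVI's, and the SSA-like construction is glossed over entirely: one
linear sweep only ever narrows ranges, and sweeps are repeated until
nothing changes, so the number of sweeps is bounded by the lattice height.

  // Toy model only (not the LVI/CVP code): ranges only ever shrink, so
  // the outer loop in solve() runs at most lattice-height times.
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Range { std::int64_t Lo, Hi; };   // stand-in for ConstantRange

  struct Node {                            // one SSA value
    std::vector<int> Inputs;               // indices of the values feeding it
    Range R{INT64_MIN, INT64_MAX};         // start at "top" (unknown)
  };

  // One linear sweep: intersect each value's range with its inputs' ranges.
  // Returns true if any range got strictly smaller.
  static bool sweep(std::vector<Node> &Vals) {
    bool Changed = false;
    for (Node &N : Vals) {
      Range New = N.R;
      for (int I : N.Inputs) {
        New.Lo = std::max(New.Lo, Vals[I].R.Lo);
        New.Hi = std::min(New.Hi, Vals[I].R.Hi);
      }
      if (New.Lo != N.R.Lo || New.Hi != N.R.Hi) {
        N.R = New;
        Changed = true;
      }
    }
    return Changed;
  }

  void solve(std::vector<Node> &Vals) {
    while (sweep(Vals))   // O(N) per sweep; at most lattice-height sweeps
      ;
  }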
What is the height of the lattice here? I understand that it goes from top -> defined -> overdefined, and that's three, but defined is not itself one state. Because it is tracking integer ranges, those ranges could undergo 2^#bits refinements in the worst case (thus making the lattice height quite large), no?
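For instance (a toy case of my own, not something taken from Joerg's
report, and sumTo is just a made-up name), a plain counting loop already
seems to show this:

  // Hypothetical illustration: the interval a range analysis deduces for I
  // can move one step per round of a naive fixed-point iteration ([0,0],
  // then [0,1], then [0,2], and so on), so an i32 induction variable could
  // need on the order of 2^32 refinements before it settles, unless it is
  // widened straight to overdefined.
  unsigned long long sumTo(unsigned N) {   // N is an unknown 32-bit value
    unsigned long long Sum = 0;
    for (unsigned I = 0; I != N; ++I)
      Sum += I;
    return Sum;
  }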
Thanks again,
Hal
> CVP as currently implemented tries to be very lazy. The cost of being
> very lazy is that it has a worse theoretical complexity in the worst
> case, but is faster in cases where there is not a lot to do.
--
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory