[llvm-dev] High memory use and LVI/Correlated Value Propagation
Philip Reames via llvm-dev
llvm-dev at lists.llvm.org
Thu Jan 14 14:46:40 PST 2016
On 01/14/2016 02:26 PM, Hal Finkel wrote:
> ----- Original Message -----
>> From: "Philip Reames via llvm-dev" <llvm-dev at lists.llvm.org>
>> To: "Daniel Berlin" <dberlin at dberlin.org>, "Joerg Sonnenberger" <joerg at britannica.bec.de>, "llvm-dev"
>> <llvm-dev at lists.llvm.org>
>> Sent: Thursday, January 14, 2016 3:26:02 PM
>> Subject: Re: [llvm-dev] High memory use and LVI/Correlated Value Propagation
>>
>> On 01/13/2016 04:28 PM, Daniel Berlin via llvm-dev wrote:
>>
>> On Wed, Jan 13, 2016 at 4:23 PM, Joerg Sonnenberger via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> On Wed, Jan 13, 2016 at 03:38:24PM -0800, Philip Reames wrote:
>>> I don't think that arbitrarily limiting the complexity of the search
>>> is the right approach. There are numerous ways the LVI infrastructure
>>> could be made more memory efficient. Fixing the existing code to be
>>> memory efficient is the right approach. Only once there's no more low
>>> hanging fruit should we even consider clamping the search.
>> Memory efficiency is only half of the problem. For example, groonga's
>> expr.c takes 4 minutes to build on my laptop, a 2.7GHz i7. That doesn't
>> sound reasonable for -O2. Unlike the GVN issue, the cases I have run
>> into do finish after a(n unreasonable) while, so at least it is not
>> trivially superlinear.
>>
>> Okay, so rather than artificially limit stuff, we should see if we
>> can fix the efficiency of the algorithms.
>>
>> CVP is an O(N * lattice height) pass. It sounds like it is behaving
>> worse than that for you. (Deliberately replying up thread to skip
>> detailed discussion of the lattice.)
>>
>> We have had bugs in the past which caused us to move back up the
>> lattice. The most likely cause of the long running time(*) is an
>> accidental reversal in the lattice. Note that we do move up the
>> lattice intentionally in some cases (specifically around assumes)
>> when intersecting facts from different sources.
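
To make the lattice direction concrete, here is a minimal illustrative
sketch (simplified names and range representation, not the actual
LVILatticeVal code) of a three-level lattice whose merge can only move
downward, from undefined through constant range to overdefined. The
reversal bug described above would correspond to merge() ever returning
a strictly narrower range than one of its inputs:

    // Illustrative only: a simplified three-level lattice. In a
    // correct implementation, merge() is monotone -- its result is
    // never higher in the lattice (never a narrower range) than
    // either input, so repeated merging must terminate.
    #include <algorithm>
    #include <cstdint>

    struct Range { std::int64_t Lo, Hi; };  // half-open interval [Lo, Hi)

    struct LatticeVal {
      enum Tag { Undefined, ConstantRange, Overdefined };
      Tag T;
      Range R;

      static LatticeVal merge(const LatticeVal &A, const LatticeVal &B) {
        if (A.T == Undefined) return B;
        if (B.T == Undefined) return A;
        if (A.T == Overdefined || B.T == Overdefined)
          return {Overdefined, {0, 0}};
        // Union of the two ranges: never narrower than either input,
        // so this step only moves down the lattice.
        return {ConstantRange, {std::min(A.R.Lo, B.R.Lo),
                                std::max(A.R.Hi, B.R.Hi)}};
      }
    };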
>>
>> (*) Assuming we're talking about an algorithmic issue and not simply
>> poor implementation quality. We're definitely more wasteful with
>> memory than we need to be, for instance. Depending on the caching
>> behavior of the algorithm, this could appear superlinear in running
>> time.
>>
>> If you have found a case that involves many steps for each variable
>> in the lattice, one principled fix would be to bound the number of
>> constant range refinements allowed: e.g., if the constant range has
>> already been updated five times, drop to overdefined.
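
A hedged sketch of that suggestion (the helper name and map key are
hypothetical, not existing LVI API): keep a per-value refinement
counter and clamp to overdefined once a small budget is exhausted,
which bounds the number of lattice steps any one value can take.

    // Hypothetical helper, not part of LVI: budget the number of
    // constant-range refinements per value before giving up.
    #include <unordered_map>

    struct RefinementLimiter {
      static const unsigned MaxRefinements = 5;  // budget from the text
      std::unordered_map<const void *, unsigned> Count;  // keyed by Value*

      // True if V may be refined again; once the budget is spent, the
      // caller should drop V's lattice state to overdefined.
      bool allowRefinement(const void *V) {
        return ++Count[V] <= MaxRefinements;
      }
    };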
> I agree, although it might be nice only to limit the number of refinements across backedges (because, if I'm thinking about this correctly, that's the only way to really get any actually-bad behavior). If there are no backedges, then the number of refinements is limited by the number of instructions in any case.
I'd have to double check, but I don't believe LVI is optimistic about
loops at all. I believe it will look at each input recursively,
special-case the situation where it encounters the query block while
doing so, and otherwise immediately fall to overdefined.
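
A minimal sketch of that pessimistic behavior (hypothetical names;
this is not LVI's real query path): the recursive walk marks each
block in progress, and if it re-enters a block already on the current
query stack it falls straight to overdefined rather than iterating
optimistically around the loop.

    // Illustrative only. Lower enum values are higher in the lattice;
    // merge() keeps whichever operand is lower (more pessimistic).
    #include <set>
    #include <vector>

    struct Block { std::vector<Block *> Preds; };
    enum class Val { Undefined, ConstantRange, Overdefined };

    static Val merge(Val A, Val B) { return A > B ? A : B; }

    Val solveBlock(Block *BB, std::set<Block *> &InProgress) {
      if (!InProgress.insert(BB).second)
        return Val::Overdefined;  // re-entered the query block: give up
      Val Result = Val::Undefined;
      for (Block *Pred : BB->Preds)
        Result = merge(Result, solveBlock(Pred, InProgress));
      InProgress.erase(BB);
      return Result;
    }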
>
> -Hal
>
>> Philip