[llvm-dev] [RFC] Interprocedural MIR-level outlining pass

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Thu Sep 1 07:20:09 PDT 2016


:-)

I was actually about to just post the following -

Any complete code folding algorithm should be able to determine that the
following functions can all be folded together. I'm just using full
functions because it's easier, as they only have one output; feel free to
make them partial.
All of these provably compute the same thing (assume proper data types,
blah blah blah).

1.

a = b + c
d = e + f
return a + d

2.

a = b + c
d = e + f
g = a + 0
h = d + 0
return g + h

3.

a = b + c
d = e + f
g = a + 1
h = d - 1
return g + h

4.

if (arg1)
{
  a = b + c
  d = e + f
}
else
{
  a = e + f
  d = b + c
}
return a + d

5.

data[] = {e, f, b, c};
sum = 0;
for (i = 0; i < 4; ++i)
 sum += data[i];
return sum;

etc

You can go on forever.
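
To make "provably compute the same thing" concrete, here is a throwaway
check (just illustrative C++, not anything proposed) that spells the five
variants out and asserts they agree; overflow pedantry aside, each returns
(b + c) + (e + f):

  #include <cassert>

  int v1(int b, int c, int e, int f) { int a = b + c, d = e + f; return a + d; }
  int v2(int b, int c, int e, int f) {
    int a = b + c, d = e + f, g = a + 0, h = d + 0;
    return g + h;
  }
  int v3(int b, int c, int e, int f) {
    int a = b + c, d = e + f, g = a + 1, h = d - 1;
    return g + h;
  }
  int v4(bool arg1, int b, int c, int e, int f) {
    int a, d;
    if (arg1) { a = b + c; d = e + f; } else { a = e + f; d = b + c; }
    return a + d;
  }
  int v5(int b, int c, int e, int f) {
    int data[] = {e, f, b, c}, sum = 0;
    for (int i = 0; i < 4; ++i) sum += data[i];
    return sum;
  }

  int main() {
    int b = 2, c = 3, e = 5, f = 7, want = (b + c) + (e + f);
    assert(v1(b, c, e, f) == want && v2(b, c, e, f) == want);
    assert(v3(b, c, e, f) == want);
    assert(v4(true, b, c, e, f) == want && v4(false, b, c, e, f) == want);
    assert(v5(b, c, e, f) == want);
  }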

I cannot see how it is possible to make a complete code folding algorithm
that catches even the above cases and is not based on a VN algorithm that
either generates dags that look the same for each function (or part of a
function), determines that two dags that don't look like they compute the
same operation do, in fact, compute the same operation, or canonicalizes
the function itself.

Maybe it's possible. I'm definitely not smart enough to see it :)

Can you make code folding algorithms that catch less? Sure.
Might you want to do that anyway because it's easier/faster/whatever?
Again, sure.

But at some point, as your approach is extended to catch more and more
cases, I do not see how it does not either rely on a complete VN analysis
or become a complete VN analysis itself.
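
To make the VN point concrete, here is a minimal sketch of value numbering
over three-address tuples with one simplification rule (x + 0 -> x). It is
not the proposed pass and not GVN, just an illustration that VN plus
simplification already folds examples 1 and 2 above onto the same
expression DAG; example 3 additionally needs a reassociation rule to
cancel the +1/-1 pair, and 4 and 5 need more machinery still:

  #include <cstdio>
  #include <map>
  #include <string>
  #include <tuple>

  using ValueNum = int;

  struct ValueTable {
    std::map<std::string, ValueNum> NameToVN;                          // names and leaves
    std::map<std::tuple<char, ValueNum, ValueNum>, ValueNum> ExprToVN; // (op, lhs, rhs)
    ValueNum Next = 0;

    ValueNum leaf(const std::string &Name) {
      auto It = NameToVN.find(Name);
      return It != NameToVN.end() ? It->second : (NameToVN[Name] = Next++);
    }

    // Value-number "Dst = LHS Op RHS", folding x + 0 to x on the way.
    ValueNum number(const std::string &Dst, char Op, const std::string &LHS,
                    const std::string &RHS) {
      ValueNum L = leaf(LHS);
      if (Op == '+' && RHS == "0")
        return NameToVN[Dst] = L;
      ValueNum R = leaf(RHS);
      auto Key = std::make_tuple(Op, L, R);
      auto It = ExprToVN.find(Key);
      ValueNum V = It != ExprToVN.end() ? It->second : (ExprToVN[Key] = Next++);
      return NameToVN[Dst] = V;
    }
  };

  int main() {
    ValueTable VT;
    // Example 1: a = b + c; d = e + f; return a + d
    VT.number("a1", '+', "b", "c");
    VT.number("d1", '+', "e", "f");
    ValueNum Ret1 = VT.number("r1", '+', "a1", "d1");
    // Example 2: the same, plus g = a + 0 and h = d + 0
    VT.number("a2", '+', "b", "c");
    VT.number("d2", '+', "e", "f");
    VT.number("g2", '+', "a2", "0");
    VT.number("h2", '+', "d2", "0");
    ValueNum Ret2 = VT.number("r2", '+', "g2", "h2");
    // Both return expressions get the same value number (VN6 here).
    std::printf("example 1 -> VN%d, example 2 -> VN%d\n", Ret1, Ret2);
  }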

On Thu, Sep 1, 2016 at 4:08 AM, Andrey Bokhanko <andreybokhanko at gmail.com>
wrote:

> Hi Daniel,
>
> Consider me convinced (not sure you care, but still... :-))
>
> What confused me is that this is not VN in a traditional sense -- it's
> more like using VN's infrastructure to compute something different. But one
> can use VN's code for this, indeed.
>
> Thank you for the explanation!
>
> Yours,
> Andrey
>
>
> On Thu, Sep 1, 2016 at 4:24 AM, Daniel Berlin <dberlin at dberlin.org> wrote:
>
>>
>>
>> On Wed, Aug 31, 2016 at 5:36 PM, Hal Finkel <hfinkel at anl.gov> wrote:
>>
>>>
>>> ------------------------------
>>>
>>> From: "Daniel Berlin" <dberlin at dberlin.org>
>>> To: "Hal Finkel" <hfinkel at anl.gov>
>>> Cc: "Andrey Bokhanko" <andreybokhanko at gmail.com>, "llvm-dev" <llvm-dev at lists.llvm.org>
>>> Sent: Wednesday, August 31, 2016 7:02:57 PM
>>> Subject: Re: [llvm-dev] [RFC] Interprocedural MIR-level outlining pass
>>>
>>> (and in particular, the definition of equivalence used by code folding
>>> to make the dags is something like "two VNDAG expressions are equivalent
>>> if their operands come from VNDAG expressions with the same opcode")
>>>
>>> Thus,
>>>
>>> VN2 = VN0 + VN1
>>> VN3 = VN1 + VN2
>>>
>>> is considered equivalent to
>>>
>>> VN2 = VN0 + VN5
>>> VN3 = VN1 + VN2
>>>
>>> Despite the fact that this is completely illegal for straight redundancy
>>> elimination.
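>>>
>>> For concreteness, a throwaway sketch of the two notions of "equal", with
>>> invented structs rather than GVN's real data structures: strict VN
>>> equivalence compares the operands' value numbers, while the folding
>>> equivalence above only compares the opcodes of the expressions defining
>>> the operands, so VN0 + VN1 and VN0 + VN5 can match.
>>>
>>>   #include <cstdio>
>>>   #include <string>
>>>   #include <vector>
>>>
>>>   struct VNExpr {
>>>     std::string Opcode;                      // e.g. "add"
>>>     std::vector<int> OperandVNs;             // value numbers of the operands
>>>     std::vector<std::string> OperandOpcodes; // opcodes of the defining exprs
>>>   };
>>>
>>>   // Strict VN equivalence: same opcode, same operand value numbers.
>>>   bool strictlyEqual(const VNExpr &A, const VNExpr &B) {
>>>     return A.Opcode == B.Opcode && A.OperandVNs == B.OperandVNs;
>>>   }
>>>
>>>   // Folding equivalence: same opcode, operands merely defined by
>>>   // expressions with the same opcodes.
>>>   bool foldEquivalent(const VNExpr &A, const VNExpr &B) {
>>>     return A.Opcode == B.Opcode && A.OperandOpcodes == B.OperandOpcodes;
>>>   }
>>>
>>>   int main() {
>>>     VNExpr X{"add", {0, 1}, {"input", "input"}}; // VN2 = VN0 + VN1
>>>     VNExpr Y{"add", {0, 5}, {"input", "input"}}; // VN2 = VN0 + VN5
>>>     std::printf("strict: %d  folding: %d\n",     // prints "strict: 0  folding: 1"
>>>                 strictlyEqual(X, Y), foldEquivalent(X, Y));
>>>   }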
>>>
>>>
>>> But again, as I said, if y'all want to make a pass that basically
>>> generates a new type of expression DAG, have fun :)
>>>
>>> I don't think that anyone wants to do anything unnecessary, or re-invent
>>> any wheels. I'm just trying to understand what you're saying.
>>>
>>> Regarding your example above, I certainly see how you might do this
>>> simply by defining an equivalence class over all "external" inputs. I don't
>>> think this is sufficient, however, for what is needed here. The problem is
>>> that we need to recognize maximal structurally-similar subgraphs, and I
>>> don't see how what you're proposing does that (even if you have some scheme
>>> where you don't hash every part of each instruction). Maybe if you had some
>>> stream-based similarity-preserving hash function, but that's another matter.
>>>
>>> Now, what I want to recognize is that the latter two instructions in
>>> each group are structurally similar. Even if I define an equivalence class
>>> over external inputs, in this case, E = { a, b, c, d, e, f, g, h}, then I
>>> have:
>>>
>>> q = E + E
>>> r = E + E
>>> s = q + r
>>>
>>> t = q - r
>>>
>> Doing literally nothing but building a standard VN dag (i.e., with normal
>> equivalence), the dag here will contain:
>>
>> V1 = VN Expression of E + E
>> V2 = V1
>> s = VN expression of V1 + V2 (you can substitute V1 if you like or not)
>> t = VN expression of V1 - V2 (ditto)
>>
>> and:
>>>
>>> u = E + E
>>> v = E - E
>>> w = u + v
>>> x = u - v
>>>
>>
>>
>> The dag here would contain
>> V1 = E + E
>> V2 = E - E
>> w = VN expression of V1 + V2
>> x = VN expression of V1 - V2
>>
>>
>> So what's the issue you are worried about here, precisely?
>>
>> If you extend this example to be longer, the answer does not change.
>>
>> If you make it so one example is longer than the other, you can still
>> discover this by normal graph isomorphism algorithms.
>> If you literally only care about the operation being performed, you can
>> make that work too.
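>>
>> For the "only care about the operation" direction, here is a throwaway
>> sketch (invented Expr type, nothing from the pass or from GVN) that
>> compares opcode skeletons down to a chosen cut and treats everything
>> below the cut as an interchangeable input. With the cut one level down,
>> s matches w and t matches x; a full-depth comparison sees the add/sub
>> difference in r vs v:
>>
>>   #include <cstdio>
>>   #include <string>
>>   #include <vector>
>>
>>   struct Expr {
>>     std::string Opcode;                 // "input", "add", "sub", ...
>>     std::vector<const Expr *> Operands; // empty for external inputs
>>   };
>>
>>   // Compare opcode skeletons down to Depth levels; below that, any two
>>   // nodes are treated as interchangeable inputs.  Commutative opcodes
>>   // would want their operands put in a canonical order first.
>>   bool sameSkeleton(const Expr *A, const Expr *B, unsigned Depth) {
>>     if (Depth == 0)
>>       return true;
>>     if (A->Opcode != B->Opcode || A->Operands.size() != B->Operands.size())
>>       return false;
>>     for (size_t I = 0; I < A->Operands.size(); ++I)
>>       if (!sameSkeleton(A->Operands[I], B->Operands[I], Depth - 1))
>>         return false;
>>     return true;
>>   }
>>
>>   int main() {
>>     Expr In{"input", {}};
>>     Expr Q{"add", {&In, &In}};                   // q = E + E (u has the same shape)
>>     Expr R{"add", {&In, &In}};                   // r = E + E
>>     Expr V{"sub", {&In, &In}};                   // v = E - E
>>     Expr S{"add", {&Q, &R}}, T{"sub", {&Q, &R}}; // first group
>>     Expr W{"add", {&Q, &V}}, X{"sub", {&Q, &V}}; // second group
>>     std::printf("cut at depth 1: s~w=%d t~x=%d\n",
>>                 sameSkeleton(&S, &W, 1), sameSkeleton(&T, &X, 1)); // both 1
>>     std::printf("full depth:     s~w=%d t~x=%d\n",
>>                 sameSkeleton(&S, &W, 3), sameSkeleton(&T, &X, 3)); // both 0
>>   }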
>>
>>
>>
>>> but then r and v are still different, so how do we find that {s, t} can
>>> be abstracted into a function, because we've found the isomorphism { s ->
>>> w, t -> x }, and thus reuse that function to evaluate {w, x}?
>>>
>>
>> So let's back up. In your scheme, how do you think you do that now,
>> precisely?
>> What is the algorithm you use to discover the property you are trying to
>> find?
>>
>>
>

