[llvm-dev] Loop identification
Mehdi Amini via llvm-dev
llvm-dev at lists.llvm.org
Fri Jan 13 09:25:08 PST 2017
> On Jan 13, 2017, at 9:13 AM, Krzysztof Parzyszek via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> On 1/13/2017 10:52 AM, Hal Finkel wrote:
>> On 01/13/2017 10:19 AM, Krzysztof Parzyszek via llvm-dev wrote:
>>>
>>> If LLVM supported adding such target-specific passes at that point in
>>> the optimization pipeline, you could just write your own pass and plug
>>> it in there.
>>
>> This certainly seems like a reasonable thing to support, but the
>> question is: Why should your pass run early in the mid-level optimizer
>> (i.e. in the part of the pipeline we generally consider
>> canonicalization) instead of as an early IR pass in the backend? Adding
>> IR-level passes early in the backend is well supported. There are plenty
>> of potential answers here for why earlier is better (e.g. affecting
>> inlining decisions, idioms might be significantly more difficult to
>> recognize after vectorization, etc.) but I think we need to discuss the
>> use case.
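
FWIW, the backend route Hal mentions is the existing TargetPassConfig::addIRPasses() hook; a target can override it and schedule its own IR-level pass ahead of the common backend IR passes. A very rough sketch, with hypothetical names for the target and the pass:

  #include "llvm/CodeGen/TargetPassConfig.h"

  namespace llvm { FunctionPass *createMyIdiomRecognizePass(); } // hypothetical pass

  // Illustrative only: "MyTargetPassConfig" stands in for a real target's
  // pass config class under lib/Target/<Target>.
  class MyTargetPassConfig : public llvm::TargetPassConfig {
  public:
    using llvm::TargetPassConfig::TargetPassConfig; // reuse the base constructors

    void addIRPasses() override {
      addPass(llvm::createMyIdiomRecognizePass()); // target-specific IR pass, first
      llvm::TargetPassConfig::addIRPasses();       // then the usual backend IR passes
    }
  };

Several in-tree targets already add IR-level passes in exactly this way.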
>
> The reason is that the idiom code may end up looking different each time one of the preceding optimizations is changed. Also, some of the optimizations (the instruction combiner, for example) have a tendency to greatly obfuscate the code
This seems quite contradictory to what I was constantly told about the goal of inst-combine, i.e. that it is supposed to canonicalize the IR to make it easier for later passes to recognize patterns.
> , making it really hard to extract useful data from the idiom code. It is not always enough to simply recognize a pattern: to replace it with an intrinsic, some additional parameters may need to be obtained from the initial code. When the code has been mangled by the combiner, this process may be a lot harder. Also, the combiner is one of those things that changes quite often. For recognizing loop idioms, the loop optimizations may be the main problem. The idiom code may end up getting unrolled, rotated, or otherwise rendered unrecognizable.
That part (loop optimizations) was acknowledged in Hal's answer IIUC.
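
To make the "additional parameters" point above concrete: even for a byte-store loop that becomes llvm.memset, the recognizer still has to recover the base pointer, the stored value, and the byte count from whatever the earlier passes left of the loop before it can emit the call. A rough sketch of just that last step (the helper name, and how the operands were obtained, are hypothetical):

  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  // Hypothetical final step of an idiom recognizer: the pattern has already
  // been matched, and BasePtr/StoredByte/NumBytes were dug out of the loop --
  // which is the part that gets painful once the loop has been rotated,
  // unrolled, or otherwise rewritten by earlier passes.
  static void replaceIdiomWithMemSet(Instruction *InsertPt, Value *BasePtr,
                                     Value *StoredByte, Value *NumBytes) {
    IRBuilder<> Builder(InsertPt);
    // Byte-aligned memset; alignment argument per the early-2017 signature.
    Builder.CreateMemSet(BasePtr, StoredByte, NumBytes, 1);
    // The caller would then delete the now-dead original loop.
  }

This mirrors what the in-tree LoopIdiomRecognize pass does for memset/memcpy; the hard part is recovering those operands, not emitting the call.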
—
Mehdi