[llvm-dev] Writing loop transformations on the right representation is more productive
Nicolai Hähnle via llvm-dev
llvm-dev at lists.llvm.org
Mon Feb 10 12:52:27 PST 2020
On Sat, Feb 8, 2020 at 7:11 AM Michael Kruse via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> I am not sure that a pass manager is that necessary. It adds flexibility to the optimization process, but as long as we don't support dynamically adding transformations, I think organizing the transformations in code is sufficient at the beginning. Since transformations apply on subtrees, are prioritized, and can add candidates to be selected by a cost function, a pass manager would have to be structured differently.
It can be convenient to be able to just plug different pass orders
together in a tool like opt, to explore what happens with different
pass structures.
[snip]
>> 3) On the technical design point of red/green trees, my personal opinion is that this isn’t solving the important part of the problem. Red/green trees (as far as I’m aware) are a memory saving optimization.
>
> Not entirely; there would also be a run-time cost if we had to copy the complete IR before creating a new "red" representation. If this were necessary, I think it would be a barrier to speculatively applying transformations.
Agreed.
>> However, “solving" the phase ordering issues require exploring an exponential (e.g. in depth of transformations / pass pipeline) search space problem (which is expensive in cycles, not just memory), and it relies on having a very good cost model for the generated code (which is a universally hard problem). Furthermore, on the cost model point, the performance of the code depends on the later “non loop optimizations” as well, which are not going to get changed to use this new IR.
>
> Very good point. I don't claim a red/green tree is the universal solution to this problem, but I think it makes problem space exploration significantly easier, especially when combining multiple transformations before knowing the outcome.
>
> As you point out, "non-loop optimizations" will be necessary after loop optimizations as well, such as new InstCombine opportunities after loop fusion (the same is true for LLVM's current loop optimizations, e.g. undoing LCSSA). These can be difficult to predict with a cost function, but they are usually less significant than what can be gained by the loop transformations themselves. Others, such as LICM, can be significant though and should be considered by a cost function (or simply applied to the red/green tree before evaluating the cost function, so they are considered automatically).
I've actually seen a bunch of code with loops that would traditionally
be considered quite weird, where loop unrolling unlocks a huge number
of InstCombine and even SimplifyCFG opportunities to the point where
after loop unrolling, the code is barely larger than the original
code. So being able to speculatively combine not just loop passes but
also generic IR passes is definitely interesting.
Cheers,
Nicolai
--
Learn how the world really is,
but never forget how it ought to be.