[llvm-dev] Writing loop transformations on the right representation is more productive

Michael Kruse via llvm-dev llvm-dev at lists.llvm.org
Tue Jan 21 15:53:26 PST 2020


Am Mi., 15. Jan. 2020 um 03:30 Uhr schrieb Renato Golin <rengolin at gmail.com>:
> > I see it as an advantage in respect of adoption: It can be switched on and off without affecting other parts.
>
> That's not necessarily true.
>
> If we do like Polly, it is, but then the ability to reuse code is very
> low and the time spent converting across is high. If we want to reuse,
> then we'll invariably add behavioural dependencies and disabling the
> pass may have side-effects.

This applies literally to any pass.

I think the problem of reusability is even worse for the current loop
optimization passes. We have multiple, partially
transformation-specific dependence analyses, such as LoopAccessAnalysis,
DependenceInfo, LoopInterchangeLegality, etc. Another one is currently
in the works.

https://xkcd.com/927/ actually does apply here, but I also think that
pass-specific dependence analyses do not scale.


> > I'd put loop optimizations earlier into the pipeline than vectorization. Where exactly is a phase ordering problem. I'd want to at least preserve multi-dimensional subscripts. Fortunately MemRef is a core MLIR construct and unlikely to be lowered before lowering to another representation (likely LLVM-IR).
>
> Many front-ends do that even before lowering to IR because of the
> richer semantics of the AST, but it's also common for that to
> introduce bugs down the line (don't want to name any proprietary
> front-ends here).

This is a problem for any intermediate representation. But isn't that
also the point of MLIR? To be able to express higher-level language
concepts in the IR as dialects? This might introduce bugs as well.

One example is the lowering of multi-dimensional arrays from Clang's
AST to LLVM-IR. We can argue whether the C/C++ spec would allow
GetElementPtr to be emitted with the "inrange" modifier, but for VLAs,
we cannot even express the multi-dimensional subscripts in the IR, so
we had an RFC to change that.
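
To illustrate (a minimal, hypothetical sketch, not taken from the
RFC): once Clang has lowered a subscript into a variably-sized array,
the optimizer effectively only sees the linearized form, roughly

    // What LLVM-IR retains of A[i][j] on a VLA int A[n][m]: the
    // i*m + j linearization is done by the front-end, so an analysis
    // has to de-linearize flat pointer arithmetic to recover the
    // two-dimensional subscript.
    void zero(int *A, long n, long m) {
      for (long i = 0; i < n; ++i)
        for (long j = 0; j < m; ++j)
          A[i * m + j] = 0;
    }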

I don't find the argument "there might be bugs" very convincing.


> > I don't see how this is relevant for a Clang-based pipeline. Other languages likely need a different pipeline than one intended for C/C++ code.
>
> Yes, but we want our passes to work for all languages and be less
> dependent on how well they lower their code.
>
> If they do it well, awesome. If not, and if we can identify patterns
> in LLVM IR then there is no reason not to.

This was relevant to the discussion about whether /all/ front-ends
would have to generate good-enough annotations for loop
transformations. Only the front-ends that do would enable the loop
optimization passes.

Generally, I'd try to make it easy for other front-ends to have loop
optimizations. For instance, avoid isa<LoadInst> in favor of the more
generic "mayReadFromMemory" in analysis/transformation phases.


Michael

