[llvm-dev] Writing loop transformations on the right representation is more productive

Michael Kruse via llvm-dev llvm-dev at lists.llvm.org
Thu Jan 30 01:05:46 PST 2020


On Mon, Jan 27, 2020 at 10:06 PM Uday Kumar Reddy Bondhugula <
uday at polymagelabs.com> wrote:

> Hi Michael,
>
> Although the approach to use a higher order in-memory abstraction like the
> loop tree will make it easier than what you have today, if you used MLIR
> for this representation, you already get a round trippable textual format
> that is *very close* to your form. The affine.for/if, std.for/if in MLIR
> are nearly isomorphic to the tree representation you want, and as such,
> this drastically reduces the delta between the in-memory data structures
> your passes want to operate on and what you see when you print the IR.
> Normally, there'd be resistance to building a textual form / introducing a
> first class concept in the IR for what you are proposing, but since this
> was already done for MLIR, it looks like it would be a big win from a
> compiler developers' productivity standpoint if you just used MLIR for this
> loop tree representation. With regard to the FAQ, I can't tell whether you
> meant something else or missed the representation used in MLIR for the
> affine dialect or in general for "ops with regions".
>

The point of the proposal is not to have a first-class construct for loops,
but to allow speculative transformations. Please see my response to Chris
Lattner:
https://lists.llvm.org/pipermail/llvm-dev/2020-January/138778.html


> > Q: Relation to MLIR?
> > A: MLIR is more similar to LLVM-IR than a loop hierarchy. For
>
> It's completely the opposite unless you are looking only at MLIR's std
> dialect! The affine dialect as well as the std.for/if (currently misnamed
> as loop.for/loop.if) are actually a loop tree. The affine ops are just an
> affine loop AST isomorphic to the materialization of polyhedral
> domains/schedules via code generation. Every isl AST or the output of
> polyhedral code generation can be represented in the affine dialect and
> vice versa. MLIR's loop/if ops are a hierarchy rather than a flat
> list-of-blocks CFG.
>

As per my discussion with Chris Lattner, this is a very subjective
question. It might be controversial, but I don't see MLIR regions as much
more than syntactic sugar for inlined function calls that can reference
definitions from the enclosing regions. This does not mean that I think
they are useless; on the contrary.
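
To make the analogy concrete, here is a loose C++ sketch (purely
illustrative; the names are made up and this is not how MLIR is
implemented): a region-carrying op behaves much like a lambda written
inline that can refer to definitions from the enclosing scope, whereas
outlining the same body into a function forces every outer value to be
passed explicitly.

  #include <cstdio>

  // "Call" form: the body is a separate function and every value it uses
  // from the surrounding context has to be passed in explicitly.
  static void outlinedBody(int i, int scale, int *buf) { buf[i] = scale * i; }

  int main() {
    int buf[8];
    int scale = 3;

    // "Region" form: the body is written inline and simply refers to
    // `scale` and `buf` from the enclosing scope, much like an MLIR region
    // referring to SSA values defined outside of it.
    auto regionBody = [&](int i) { buf[i] = scale * i; };

    for (int i = 0; i < 8; ++i)
      regionBody(i);               // references outer definitions directly
    for (int i = 0; i < 8; ++i)
      outlinedBody(i, scale, buf); // everything passed explicitly

    std::printf("%d\n", buf[7]);
    return 0;
  }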

Regarding the affine dialect, I see the same problem that Polly has when
creating a schedule tree representation: a lot of work has to be done to
make IR originating from Clang compatible with it. Everything becomes much
easier if the front-end can generate the affine dialect out of the box.



> > still have to be rediscovered. However, a loop hierarchy optimizer
> > could be applied to MLIR just as well as to LLVM-IR.
>
> This is true, but it's easier to apply it to MLIR because the actual IR is
> miles closer to the in-memory structures your loop hierarchy optimizer would
> be using. For example, here's the input IR and the output IR of a simple outer
> loop vectorization performed in MLIR:
>
> https://github.com/bondhugula/llvm-project/blob/hop/mlir/test/Transforms/vectorize.mlir#L23
>
>
Again, the proposal is about the in-memory representation using red/green
trees (which I firmly disagree is close to MLIR's in-memory
representation), not the textual format.
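
For readers who have not seen the red/green idea (it comes from Roslyn):
green nodes are immutable and freely shared, so a transformation builds a
new root that reuses all untouched subtrees, and several speculative
variants of a loop nest can coexist until a cost model picks one. Below is
a minimal C++ sketch of the green half only, with made-up names; nothing
here is an existing LLVM or MLIR API.

  #include <memory>
  #include <string>
  #include <vector>

  // Immutable "green" node: once created it is never modified, so subtrees
  // can be shared between the original tree and any transformed variants.
  struct GreenNode {
    std::string kind;  // e.g. "loop", "stmt"
    std::vector<std::shared_ptr<const GreenNode>> children;
  };

  using GreenRef = std::shared_ptr<const GreenNode>;

  static GreenRef makeNode(std::string kind,
                           std::vector<GreenRef> children = {}) {
    return std::make_shared<const GreenNode>(
        GreenNode{std::move(kind), std::move(children)});
  }

  // A transformation never mutates; it returns a new root that shares every
  // child except the one it replaces. The original tree stays valid.
  static GreenRef replaceChild(const GreenRef &root, std::size_t idx,
                               GreenRef newChild) {
    std::vector<GreenRef> children = root->children;  // copies shared_ptrs only
    children[idx] = std::move(newChild);
    return makeNode(root->kind, std::move(children));
  }

  int main() {
    // Original: a loop containing two statements.
    GreenRef original =
        makeNode("loop", {makeNode("stmt.a"), makeNode("stmt.b")});

    // Speculative variant: the second statement is replaced by, say, a
    // vectorized version; "stmt.a" is shared, not copied.
    GreenRef vectorized = replaceChild(original, 1, makeNode("stmt.b.vec"));

    // Both versions are alive at the same time; a cost model can compare
    // them and discard the loser without rolling back in-place mutations.
    return original != vectorized ? 0 : 1;
  }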


Michael

