[llvm-dev] [RFC] Writing loop transformations on the right representation is more productive

Michael Kruse via llvm-dev llvm-dev at lists.llvm.org
Thu Jan 30 01:04:00 PST 2020


On Sun, Jan 26, 2020 at 9:04 PM Chris Lattner <clattner at nondot.org> wrote:

>
> On Jan 22, 2020, at 12:58 AM, Michael Kruse <llvmdev at meinersbur.de> wrote:
> On Wed, Jan 15, 2020 at 8:27 PM Chris Lattner <clattner at nondot.org> wrote:
>
>> Once you achieve consensus on the data structure, there is the question of
>>> what IR to use within it.  I would recommend starting with some combination
>>> of “existing LLVM IR operations + high level control flow representation”,
>>> e.g. parallel and affine loops.  The key here is that you need to always be
>>> able to lower in a simple and predictable way to LLVM IR (this is one of
>>> the things that classic polyhedral systems did sub optimally, making it
>>> difficult to reason about the cost model of various transformations), and
>>> this is a natural incremental starting point anyway.  Over time, more high
>>> level concepts can be gradually introduced.  FYI, MLIR already has a
>>> reasonable LLVM dialect <https://mlir.llvm.org/docs/Dialects/LLVM/> and
>>> can generate LLVM IR from it, so we’d just need an “LLVM IR -> MLIR LLVM
>>> dialect” conversion, which should be straightforward to build.
>>>
>>
>> Adding an LLVM-IR -> MLIR -> LLVM-IR round-trip would, at the beginning,
>> just introduce compile-time overhead and what Renato described as
>> brittleness. I fear this hurts adoption.
>>
>>
>> Isn’t this true of *any* higher level IR?  Unless I’m missing something
>> big, this seems inherent to your proposal.
>>
>
> No. A loop hierarchy may be created on-demand and can be skipped if, e.g.,
> the function does not contain a loop.
>
>
> I don’t see how this is specific to a loop IR.  Any “LLVMIR -> X ->
> LLVMIR” system has the behavior you describe, whether X is polly, mlir, or
> some other loop IR; any decision about skipping the round trip could be
> applied to any of them.
>
> The advantage of MLIR in this discussion is that it has the opportunity to
> subsume LLVMIR at some point in the future, eliminating the round trip at
> that point.
>
>
As long as there is no plan for this to happen, this is speculative.


> For IRs that are translation-unit based, the entire module will have to do
> a round-trip whether changed or not.
>
>
> I think that this must be the misunderstanding.  There is no requirement
> to do a “Full LLVM IR module to MLIR module” conversion, you can convert
> one function, one loop nest, one basic block or whatever you else you’d
> want to do.
>

Either the goal is for clang to generate MLIR translation units, or we end up
with per-function round-trips. That said, I do see that per-function
round-trips could be an intermediate state.

However, for sub-function units, one somehow has to keep references to the
llvm::Value objects defined outside the extracted unit so they can be
referenced again when generating LLVM-IR. This is possible with a
DenseMap<mlir::Operation*, llvm::Instruction*>, but sounds fragile. Also,
these llvm::Instructions must somehow be represented as mlir::Operations.
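
To illustrate the kind of bookkeeping I mean, here is a minimal sketch (not
actual MLIR or Polly code; convertToMLIR and the map itself are hypothetical)
of remembering which operation stands in for which instruction defined
outside the extracted unit:

  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/Instruction.h"
  #include "mlir/IR/Builders.h"
  #include "mlir/IR/Operation.h"

  // Hypothetical conversion helper; not part of MLIR.
  mlir::Operation *convertToMLIR(llvm::Instruction *I, mlir::OpBuilder &B);

  // Maps the placeholder operation back to the instruction it represents.
  llvm::DenseMap<mlir::Operation *, llvm::Instruction *> OutsideDefs;

  // Import an instruction defined outside the extracted unit.
  mlir::Operation *importOutsideDef(llvm::Instruction *I, mlir::OpBuilder &B) {
    mlir::Operation *Op = convertToMLIR(I, B);
    OutsideDefs[Op] = I; // remember the original definition
    return Op;
  }

  // When regenerating LLVM-IR, reuse the original instruction instead of
  // re-creating it.
  llvm::Instruction *exportOutsideDef(mlir::Operation *Op) {
    auto It = OutsideDefs.find(Op);
    return It != OutsideDefs.end() ? It->second : nullptr;
  }

The fragility is exactly that this map has to stay consistent while both
representations are being mutated.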



>
> This is pretty straight-forward (in principle, I haven’t actually built a
> system to prove this), because you can just have an operation with regions
> for each version, e.g.:
>
> for {
>   op1
>   versioned {
>     stuff
>   } {
>     other stuff
>   }
>   op2
> }
>
> Then transform the code and select which version you want later.
>
> MLIR doesn’t magically make the algorithms happen for you, but it does
> handle the representational issues, making it super flexible and easy to
> support things like this.
>
>
What I proposed is much more than the possibility of representing multiple
versions in the same IR. (Assuming we want to keep just one of the versions,
the additional ones are still visible in use-def chains and may confuse some
analyses; they might also be a problem for convergent instructions.)

I would like to stress that multi-versioning is just one of the possibilities
enabled by cheap copies (i.e. Red/Green trees), and I remain convinced that
cheap copies are not possible with a densely-coupled representation such as
MLIR, LLVM-IR, or VPlan (unless we separate the in-memory representation from
the logical data structure).

Other things enabled by cheap copies:
 * Apply a transformation first, then check legality and profitability
 * Speculative transformations / profitability can be checked for a chain of
operations
 * Canonicalizations that remove dependencies (e.g. moving computations into
the loops)
 * Lazy expansion of nodes (e.g. instructions outside of loops do not need
to be represented)
 * Cheap representation of unrolling
 * Code versioning

Your suggestion above still requires copying the entire "stuff" and "other
stuff" subtrees. It is not rare for, e.g., stencil loops to have thousands of
instructions, all of which would need to be copied for each version. This is
a practical barrier for the applications above because of the compile-time
increase.
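
To make the contrast concrete, here is a minimal sketch of the structural
sharing I have in mind (assuming a simple immutable node type; this is not
actual Polly or red/green-tree code): replacing one statement in a loop body
allocates only the nodes on the path to the root, while every other subtree
is shared between the old and the new version.

  #include <memory>
  #include <utility>
  #include <vector>

  // "Green" nodes are immutable and can therefore be shared by any number
  // of tree versions.
  struct GreenNode {
    std::vector<std::shared_ptr<const GreenNode>> Children; // shared subtrees
    // ... payload (loop header, statement, ...) elided
  };

  // Producing a version with one child replaced copies a single node;
  // the cost is O(#children), not O(size of the subtree).
  std::shared_ptr<const GreenNode>
  replaceChild(const std::shared_ptr<const GreenNode> &Node, unsigned Idx,
               std::shared_ptr<const GreenNode> NewChild) {
    auto Copy = std::make_shared<GreenNode>(*Node);
    Copy->Children[Idx] = std::move(NewChild);
    return Copy;
  }

Versioning a loop with thousands of statements then costs a handful of node
allocations instead of a full copy of "stuff" and "other stuff".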

There is an analogy in file systems: Btrfs and ZFS support copy-on-write of
files and directories, enabling new applications such as snapshots. Without
the copy-on-write feature, a snapshot would have to copy the entire
filesystem. Possible, but not really feasible.


> Indeed it is easier to not lower these constructs, but not impossible (as
> shown in https://reviews.llvm.org/D69930). I think the relevant
> difference is that these constructs come with additional guarantees (e.g.
> Single-Entry-Single-Exit regions) and optimization hurdles (e.g. thread
> synchronization; where programmers do not expect the compiler to do a lot
> of things) compared to C++ loop constructs.
>
>
> There is a dangerous implication in the way you phrased this, and while it
> may not have been intentional,  is something I think is important to point
> out.   The whole point of good IR design is to make certain things *easy*.
> IR design rarely makes things “possible”, and so saying something is “not
> impossible” is a dangerous ground to stand on.
>

I cannot follow the deduction from "less easy" to "dangerous".

Anyway, this is also the argument I want to make: A loop-only representation
can simplify implementing loop transformations by a lot. Instead of each
transformation having to implement its own legality checks and profitability
heuristics (often requiring the most lines of code), they can all share the
same implementation.

Michael