[llvm-dev] Writing loop transformations on the right representation is more productive

Nicolas Vasilache via llvm-dev llvm-dev at lists.llvm.org
Mon Feb 10 09:52:46 PST 2020


Albert, please loop me in on your discussions too (haha..), though I'll be
out the whole of next week.


On Mon, Feb 10, 2020 at 10:50 AM Albert Cohen <albertcohen at google.com>
wrote:

> Thanks Chris for CCing us.
>
> I remember Michael's presentation and suggestion to consider Roslyn's
> design and experience. I'll be glad to discuss further in April. Michael,
> we can also talk later this week if you'd like. I'll send you a separate
> email.
>
> Loop transformations in MLIR take many different paths: polyhedral
> affine-to-affine transformations, generative/lowering approaches in
> linalg, and also explorations of lifting lower-level constructs into
> affine and further into linalg and tensor compute ops. I'm all for
> exchanging on the rationale, use cases, and design of these paths,
> alongside an LLVM LNO. One practical option would be to compose these
> lifting, affine-transformation, and lowering steps to build an LLVM LNO.
> Ideally this could be one generic effort that would also interoperate
> with numerical libraries or specialized hardware ops where they exist,
> and that could be used to implement domain-specific code generators.
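>
> For concreteness, a rough sketch of what composing such transformation and
> lowering steps could look like through MLIR's C++ PassManager follows. The
> PassManager API itself is real, but the specific pass-constructor names are
> assumptions from memory (they have been renamed over time), so read it as
> the shape of a pipeline rather than a committed design:
>
>   // Illustrative only: compose an affine-to-affine transformation and a
>   // lowering step into one pipeline. Pass-constructor names and the
>   // pipeline-nesting details are assumptions, not a fixed design.
>   #include "mlir/IR/MLIRContext.h"
>   #include "mlir/IR/Module.h"
>   #include "mlir/Pass/PassManager.h"
>   #include "mlir/Support/LogicalResult.h"
>   #include "mlir/Transforms/Passes.h"
>
>   mlir::LogicalResult runLoopNestPipeline(mlir::ModuleOp module,
>                                           mlir::MLIRContext &context) {
>     mlir::PassManager pm(&context);
>
>     // Affine-to-affine, polyhedral-style transformation.
>     pm.addPass(mlir::createLoopFusionPass());   // assumed name
>
>     // Lower the affine constructs toward the standard/loop dialects,
>     // from where the existing lowering to LLVM IR takes over.
>     pm.addPass(mlir::createLowerAffinePass());  // assumed name
>
>     return pm.run(module);
>   }
>
> The point is simply that the affine-level transformations and the lowerings
> are ordinary passes, so an LLVM LNO built this way is a pipeline that can
> be extended or interposed with library- or hardware-specific ops.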
>
> More on Roslyn: I'm not convinced yet about the added value of red-green
> trees. I see them as an implementation detail at the moment: much like the
> sea of nodes is a convenient abstraction for implementing SSA-based global
> optimization passes, red-green trees may improve on the practice of
> AST/loop-nest transformations, but I don't see much in the way of
> fundamental or solid engineering benefits... yet.
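>
> To be concrete about what I mean by "implementation detail", here is a
> minimal sketch of the red-green idea (hypothetical names of my own, not
> Roslyn's nor any LLVM/MLIR API): immutable, position-independent "green"
> nodes shared across tree versions, plus thin "red" wrappers created on
> demand to add parent context for navigation.
>
>   // Minimal red/green tree sketch; all names are hypothetical.
>   #include <memory>
>   #include <string>
>   #include <vector>
>
>   // Green nodes: immutable and position-independent, so unchanged
>   // subtrees are shared between the old and the rewritten tree.
>   struct GreenNode {
>     std::string kind;
>     std::vector<std::shared_ptr<const GreenNode>> children;
>   };
>   using GreenRef = std::shared_ptr<const GreenNode>;
>
>   // Rebuild only the spine from the changed child up to the root.
>   GreenRef withChild(const GreenRef &node, size_t index, GreenRef child) {
>     auto children = node->children;  // shallow copy of child pointers
>     children[index] = std::move(child);
>     return std::make_shared<const GreenNode>(
>         GreenNode{node->kind, std::move(children)});
>   }
>
>   // Red nodes: cheap, lazily created views that add parent context for
>   // navigating the tree during a transformation.
>   struct RedNode {
>     GreenRef green;
>     const RedNode *parent;
>     RedNode child(size_t index) const {
>       return {green->children[index], this};
>     }
>   };
>
>   int main() {
>     GreenRef body =
>         std::make_shared<const GreenNode>(GreenNode{"stmt", {}});
>     GreenRef loop =
>         std::make_shared<const GreenNode>(GreenNode{"loop", {body}});
>     GreenRef root =
>         std::make_shared<const GreenNode>(GreenNode{"func", {loop}});
>
>     // "Transform" the loop body: only the loop and func nodes are
>     // rebuilt; every untouched subtree is shared with the old root.
>     GreenRef tiled =
>         std::make_shared<const GreenNode>(GreenNode{"tiled_stmt", {}});
>     GreenRef newRoot = withChild(root, 0, withChild(loop, 0, tiled));
>
>     RedNode view{newRoot, nullptr};
>     (void)view.child(0).child(0);  // walk with parent pointers on demand
>   }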
>
> Albert
>
>
>
> On Sat, Feb 8, 2020 at 6:11 PM Chris Lattner <clattner at nondot.org> wrote:
>
>>
>>
>> On Feb 7, 2020, at 10:16 PM, Michael Kruse <llvmdev at meinersbur.de> wrote:
>>
>> On Fri, Feb 7, 2020 at 17:03, Chris Lattner <clattner at nondot.org> wrote:
>>
>>> > The discussion here is valuable for me, helping me to make my
>>> > presentation about it at EuroLLVM as relevant as possible. My current
>>> > idea is to take a complex loop nest, and compare optimizing it using
>>> > red/green DAGs and traditional pass-based optimizers.
>>>
>>> Cool.  I’d really recommend you connect with some of the loop
>>> optimization people working on MLIR to learn more about what they are
>>> doing, because it is directly related to this and I’d love for there to be
>>> more communication.
>>>
>>> I’ve cc'd Nicolas Vasilache, Uday Bondhugula, and Albert Cohen as
>>> examples that would be great to connect with.
>>>
>>
>> You may have already seen my discussion with Uday on the mailing list. I
>> would like to discuss approaches with all three of them, at the latest at
>> EuroLLVM (or contact me before that, e.g. on this mailing-list thread).
>>
>>
>> Great!  I’d be happy to join a discussion about this at EuroLLVM too.  I
>> think it is really important to improve the loop optimizer in LLVM, and I’m
>> glad you’re pushing on it!
>>
>> -Chris
>>
>>

-- 
N