<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
<br>
<div class="moz-cite-prefix">On 3/3/20 8:37 PM, Chris Lattner wrote:<br>
</div>
<blockquote type="cite"
cite="mid:81CC1FF9-B9DC-499F-8898-A4C300302431@nondot.org">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
On Feb 28, 2020, at 6:03 PM, Chris Lattner <<a
href="mailto:clattner@nondot.org" class=""
moz-do-not-send="true">clattner@nondot.org</a>> wrote:<br
class="">
<div>
<blockquote type="cite" class=""><br
class="Apple-interchange-newline">
<div class="">
<div class="">On Feb 28, 2020, at 8:56 AM, Johannes Doerfert
<<a href="mailto:johannesdoerfert@gmail.com" class=""
moz-do-not-send="true">johannesdoerfert@gmail.com</a>>
wrote:<br class="">
<blockquote type="cite" class="">On 02/28, Nicholas Krause
via llvm-dev wrote:<br class="">
<blockquote type="cite" class="">Anyhow what is the
status and what parts are we planning to move to<br
class="">
MLIR in LLVM/Clang. I've not seen any discussion on
that other than<br class="">
starting to plan for it. <br class="">
</blockquote>
<br class="">
As far as I know, there is no (detailed/discussed/agreed
upon/...) plan<br class="">
to move any existing functionality in LLVM-Core or Clang
to MLIR. There<br class="">
                are some people that expressed interest, and there is
                Chris's plan on how<br class="">
                the transition could look.<br class="">
</blockquote>
<br class="">
Yep, agreed, I gave a talk a couple days ago (with
Tatiana) with a proposed path forward, but explained it as
one possible path. We’ll share the slides publicly in a
few days after a couple things get taken care of.<br
class="">
</div>
</div>
</blockquote>
</div>
<br class="">
<div class="">Hi all,</div>
<div class=""><br class="">
</div>
<div class="">Here is a <a
href="https://docs.google.com/presentation/d/11-VjSNNNJoRhPlLxFgvtb909it1WNdxTnQFipryfAPU/edit#slide=id.g7d334b12e5_0_4"
class="" moz-do-not-send="true">link to the CGO presentation
slides</a> (outlining a possible path to incremental adoption
of MLIR in Clang) for anyone curious.</div>
<div class=""><br class="">
</div>
<div class="">-Chris</div>
</blockquote>
Greetings, <br>
Per David Blaikie's suggestion, I'm merging the two threads for this
discussion. The original commenter is Johannes Doerfert,<br>
starting with "Hey,":<br>
<blockquote type="cite">
<div dir="auto">Hey,
<div dir="auto"><br>
</div>
<div dir="auto">Apologies for the wait, everything right now is
going crazy..</div>
</div>
</blockquote>
Compiler folks are very busy people, and unfortunately there aren't as
many of us, so no need to <br>
apologize. I've yet to hear from anyone on the GCC side and will
wait until after GCC 11<br>
is released because of this, not to mention the health issues around
the coronavirus (COVID-19).<br>
<blockquote type="cite">
<div dir="auto">
<div dir="auto"><br>
</div>
<div dir="auto">I think we should early in move this
conversation on the llvm Dev list but generally speaking we
can see three options here:</div>
<div dir="auto">1) parallelize single passes or a subset of
        passes that are known to not interfere, e.g. the Attributor,</div>
<div dir="auto">2) parallelize analysis pass execution before a
transformation that needs them,</div>
</div>
</blockquote>
<blockquote type="cite">
<div dir="auto">3) investigate what needs to be done for a
parallel execution of many passes, e.g. How can we avoid races
on shared structure such as the constant pool. </div>
</blockquote>
I've been researching this on and off for the last few months, trying
to figure out how to make the pass manager itself async. It's not easy
and I'm not even<br>
sure that's possible. I'm not sure about GIMPLE, as I would have to
ask the middle-end maintainer on the GCC side, but LLVM IR does not
seem to have shared-state<br>
detection in the core classes, and the same goes for the back ends. So
yes, this would interest me. <br>
<br>
The first place to start is figuring out which data structures are
definitely shared. The biggest ones seem to be basic blocks and
function definitions, as <br>
those would be shared by passes running on each function. We should
start by looking at adding locks or reference counting there first,
if you're OK with that. <br>
That would also let me understand more concretely the linkage between
the core classes as it relates to multithreading LLVM. In addition,<br>
it lets us look into how to partition work across threads at the same
time.<br>
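<br>
As a rough sketch of what I mean by locks or reference counting (all
of these names are hypothetical; nothing like this exists in LLVM
today), something like a per-function mutex that a pass would have to
take before mutating that function's body:<br>
<pre>
// Hypothetical sketch only: per-function locking so that two passes never
// mutate the same function body concurrently. Reference counting of shared
// objects could be layered on in a similar way.
#include &lt;map&gt;
#include &lt;memory&gt;
#include &lt;mutex&gt;
#include &lt;string&gt;

class SharedIRLocks {
  std::mutex MapLock;  // guards the map of per-function locks itself
  std::map&lt;std::string, std::unique_ptr&lt;std::mutex&gt;&gt; FunctionLocks;

public:
  // Returns the lock for a given function, creating it on first use.
  std::mutex &amp;lockFor(const std::string &amp;FunctionName) {
    std::lock_guard&lt;std::mutex&gt; Guard(MapLock);
    auto &amp;Slot = FunctionLocks[FunctionName];
    if (!Slot)
      Slot = std::make_unique&lt;std::mutex&gt;();
    return *Slot;
  }
};

// A pass thread would then do something like:
//   std::lock_guard&lt;std::mutex&gt; Guard(Locks.lockFor(F.getName().str()));
//   ... mutate F's basic blocks ...
</pre>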
<div><br>
As was discussed on the previous thread - generally the assumption
is that one wouldn't try to run two function optimizations on the
same function at the same time, but, for instance - run function
optimizations on unrelated functions at the same time (or CGSCC
passes on distinct CGSCCs). But this is difficult in LLVM IR
because use lists are shared - so if two functions use the same
global variable or call the same 3rd function, optimizing out a
function call from each of those functions becomes a write to
shared state when trying to update the use list of that 3rd
function. MLIR apparently has a different design in this regard
that is intended to be more amenable to these situations.<br>
</div>
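<br>
To make that use-list issue concrete, here is a minimal sketch (only
the LLVM calls such as getCalledFunction() and eraseFromParent() are
real; the removeDeadCall helper and the UseListLock mutex are
hypothetical): erasing a call in one function also unlinks a use from
the callee's use list, so two threads doing this for calls into the
same third function end up writing to the same list.<br>
<pre>
// Sketch only: why erasing calls in two *different* functions can still race.
// UseListLock is a made-up mitigation, not an existing LLVM facility.
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"
#include &lt;mutex&gt;

static std::mutex UseListLock;  // hypothetical guard for use-list edits

void removeDeadCall(llvm::CallInst *Call) {
  llvm::Function *Callee = Call->getCalledFunction();  // null if indirect
  // eraseFromParent() drops Call's operands, which removes an entry from
  // Callee's use list, i.e. state shared with every other caller of Callee.
  std::lock_guard&lt;std::mutex&gt; Guard(UseListLock);
  Call->eraseFromParent();
  if (Callee &amp;&amp; Callee->use_empty()) {
    // Callee is now unreferenced; a caller might queue it for deletion.
  }
}
</pre>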
<br>
If others want to chime in, that's fine as well; I've CCed the list,
and this should be up for discussion with the whole community.<br>
<br>
I've given up on the idea of an async pass manager, as it seems to
require IR-level detection of changed state between passes, but maybe
I'm wrong.<br>
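<br>
For context on why I say that: the only "changed state" signal a pass
gives today (in the new pass manager) is the coarse preserved-analyses
set it returns; it does not say which shared structures were actually
touched. A minimal sketch (ExamplePass is just a made-up name, the
rest is the real new-pass-manager interface):<br>
<pre>
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"

struct ExamplePass : llvm::PassInfoMixin&lt;ExamplePass&gt; {
  llvm::PreservedAnalyses run(llvm::Function &amp;F,
                              llvm::FunctionAnalysisManager &amp;AM) {
    bool Changed = false;
    // ... transform F, setting Changed if the IR was modified ...

    // This coarse answer is all the pass manager learns: it says nothing
    // about which shared structures (use lists, the constant pool, ...)
    // were touched, which is what an async pass manager would need.
    return Changed ? llvm::PreservedAnalyses::none()
                   : llvm::PreservedAnalyses::all();
  }
};
</pre>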
<br>
The other part of this discussion, before "Hey", is already on this
thread.<br>
<br>
Nick<br>
<br>
</body>
</html>