<div class="moz-cite-prefix">On 02/15/2016 11:42 PM, Andrew Trick
wrote:<br>
</div>
<blockquote
cite="mid:2353F258-86F2-4537-8BE4-C132C3AB76A1@apple.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<br class="">
<div>
<blockquote type="cite" class="">
<div class="">On Feb 15, 2016, at 5:34 PM, Philip Reames <<a
moz-do-not-send="true"
href="mailto:listmail@philipreames.com" class=""><a class="moz-txt-link-abbreviated" href="mailto:listmail@philipreames.com">listmail@philipreames.com</a></a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type" class="">
<div text="#000000" bgcolor="#FFFFFF" class=""> <br
class="">
<br class="">
<div class="moz-cite-prefix">On 02/15/2016 04:57 PM,
Andrew Trick wrote:<br class="">
</div>
<blockquote
cite="mid:980E4DAB-9A80-434B-B27B-8FDDCBD84408@apple.com"
type="cite" class="">
<meta http-equiv="Content-Type" content="text/html;
charset=utf-8" class="">
<br class="">
<div class="">
<blockquote type="cite" class="">
<div class="">On Feb 15, 2016, at 4:25 PM, Philip
Reames <<a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:listmail@philipreames.com">listmail@philipreames.com</a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type" class="">
<div text="#000000" bgcolor="#FFFFFF" class="">
After reading <a moz-do-not-send="true"
href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/"
class="">https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/</a>.,
I jotted down a couple of thoughts of my own
here: <a moz-do-not-send="true"
href="http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/"
class="">http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/</a><br
class="">
</div>
</div>
</blockquote>
<div class=""><br class="">
</div>
Thanks for sharing. I think it’s worth noting that
what you are doing would be considered 5th tier for
WebKit, since you already had a decent optimizing
backend without LLVM. </div>
</blockquote>
So, serious, but naive question: what are the other tiers
for? My mental model is generally:</div>
</div>
</blockquote>
<blockquote type="cite" class="">
<div class="">
<div text="#000000" bgcolor="#FFFFFF" class=""> tier 0 -
interpreter or splat compiler -- (to deal with run once
code)<br class="">
</div>
</div>
</blockquote>
<div><br class="">
</div>
You combined two tiers in one, and I start at 1. So using my
terminology inspired by WebKit:</div>
<div>tier 1: interpreter</div>
<div>tier 2: splat compiler</div>
</blockquote>
Ah, this was the piece I was missing. I didn't realize you had both an
interpreter and a splat compiler; that makes the numbering make a lot more
sense. I had noticed the possible off-by-one myself, but that didn't fully
explain the difference.

Can you say anything about the reasoning for having both? Do you see enough
warmish code for the splat compiler to be worthwhile? I'm used to
interpreters and splat compilers being positioned as an either-or choice.
When do you decide to promote something to the splat compiler, but not to
the "tier 3" compiler?
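
To make the question concrete, here is roughly the shape of promotion policy
I have in mind. (A rough sketch only; the tier names, counters, and
thresholds below are invented for illustration and are not anything WebKit
actually does.)

  #include <cstdint>

  // Hypothetical per-function tiering state; thresholds are made up.
  enum class Tier { Interpreter, Splat, Optimizing, Heroic };

  struct FunctionInfo {
    Tier tier = Tier::Interpreter;
    uint64_t executions = 0;      // bumped on entry by the lower tiers
    uint64_t loopIterations = 0;  // bumped on loop back-edges
  };

  // Called from the counting hooks that the lower tiers emit.
  Tier nextTier(const FunctionInfo &f) {
    uint64_t weight = f.executions + f.loopIterations;
    if (f.tier == Tier::Interpreter && weight > 100)   return Tier::Splat;
    if (f.tier == Tier::Splat && weight > 10000)       return Tier::Optimizing;
    if (f.tier == Tier::Optimizing && weight > 100000) return Tier::Heroic;
    return f.tier;
  }

Is the real policy shaped roughly like that, or is the splat-vs-optimizing
decision based on something other than raw counts?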

>> tier 1 - a fast (above all else) but decent compiler which gets the
>> obvious stuff -- (does most of the compilation by methods)
>
> or tier 3: compiling methods into IR or bytecode, applying high-level
> optimization, splatting codegen.
>
>> tier 2 - a good, but fast, compiler which generates good quality code
>> without burning too much time -- (specifically for the hotish stuff)
>
> or tier 4: high-level optimization using profile data from tier 3,
> nontrivial codegen.
>
>> tier 3 - a "compile time does not matter, get this hot method" compiler,
>> decidedly optional -- (compiles only *really* hot stuff)
>
> or tier 5: bolt a C compiler onto the JIT.
>
>> (Profiling is handled by tier 0 and tier 1 in the above.)
>
> Profiling needs to be done by all tiers up to and including at least the
> first round of high-level optimization, where the optimizer registers some
> assumptions about runtime types (tier 3 in my case).
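
If I follow, the mechanism is roughly the sketch below: the lower tiers
record what they observe, and the optimizing tier turns that into guarded
type assumptions. (Entirely hypothetical code, not JavaScriptCore's actual
data structures.)

  #include <cstdint>

  // Hypothetical per-site value profile: tiers 1 and 2 fill it in, and the
  // optimizing tier reads it when registering runtime-type assumptions.
  enum class SeenType : uint8_t { Int32, Double, Object, Other };

  struct ValueProfile {
    uint32_t counts[4] = {0, 0, 0, 0};

    void record(SeenType t) { ++counts[static_cast<unsigned>(t)]; }

    // Speculate int32 only if nothing else was ever observed here; the
    // optimizing tier then plants a guard that bails back to a lower tier
    // if the guess turns out to be wrong.
    bool shouldSpeculateInt32() const {
      return counts[0] > 0 && counts[1] == 0 && counts[2] == 0 &&
             counts[3] == 0;
    }
  };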
<div><br class="">
</div>
<div>I’m not saying it’s a good idea to have all those tiers, it’s
just a way to compare JIT levels. The point is, you are a tier
higher than B3.</div>
<div><br class="">
</div>
<div>- Andy</div>
<div><br class="">
<blockquote type="cite" class="">
<div class="">
<div text="#000000" bgcolor="#FFFFFF" class=""> <br
class="">
It really sounds to me like FTL is positioned somewhere
between tier 1 and tier 2 in the above. Is that about
right?<br class="">
<blockquote
cite="mid:980E4DAB-9A80-434B-B27B-8FDDCBD84408@apple.com"
type="cite" class="">
<div class="">You also have more room for background
compilation threads and aren’t benchmarking on a
MacBook Air.</div>
</blockquote>
True! Both definitely matter.<br class="">
<blockquote
cite="mid:980E4DAB-9A80-434B-B27B-8FDDCBD84408@apple.com"
type="cite" class="">
<div class=""><br class="">
</div>
<div class="">Andy</div>
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">
<div text="#000000" bgcolor="#FFFFFF" class=""> <br
class="">
Philip<br class="">
<br class="">
<div class="moz-cite-prefix">On 02/15/2016 03:12
PM, Andrew Trick via llvm-dev wrote:<br
class="">
</div>
<blockquote
cite="mid:7D3EEF0B-8563-4E26-8CEE-B38C3B6CB2EA@apple.com"
type="cite" class="">
<meta http-equiv="Content-Type"
content="text/html; charset=utf-8" class="">
<br class="">
<div class="">
<blockquote type="cite" class="">
<div class="">On Feb 9, 2016, at 9:55 AM,
Rafael Espíndola via llvm-dev <<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:llvm-dev@lists.llvm.org"><a class="moz-txt-link-abbreviated" href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a></a>>
wrote:</div>
<div class="">
<p dir="ltr" class=""> ><br class="">
> JavaScriptCore's [FTL JIT](<a
moz-do-not-send="true"
class="moz-txt-link-freetext"
href="https://trac.webkit.org/wiki/FTLJIT"><a class="moz-txt-link-freetext" href="https://trac.webkit.org/wiki/FTLJIT">https://trac.webkit.org/wiki/FTLJIT</a></a>)
is moving away<br class="">
> from using LLVM as its backend,
towards [B3 (Bare Bones<br class="">
> Backend)](<a
moz-do-not-send="true"
href="https://webkit.org/docs/b3/"
class=""><a class="moz-txt-link-freetext" href="https://webkit.org/docs/b3/">https://webkit.org/docs/b3/</a></a>).
This includes its own [SSA<br class="">
> IR](<a moz-do-not-send="true"
href="https://webkit.org/docs/b3/intermediate-representation.html"
class="">https://webkit.org/docs/b3/intermediate-</a><a
moz-do-not-send="true"
href="https://webkit.org/docs/b3/intermediate-representation.html"
class="">representation.html</a>),<br
class="">
> optimisations, and instruction
selection backend.</p>
<p dir="ltr" class="">In the end, what
was the main motivation for creating a
new IR?</p>
</div>
</blockquote>
</div>
<div class="">I can't speak to the motivation
of the WebKit team. Those are outlined in <a
moz-do-not-send="true"
class="moz-txt-link-freetext"
href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/"><a class="moz-txt-link-freetext" href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/">https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/</a></a>.</div>
<div class="">I'll give you my personal
perspective on using LLVM for JITs, which
may be interesting to the LLVM community.</div>
<div class=""><br class="">
</div>
<div class="">Most of the payoff for high
level languages comes from the
language-specific optimizer. It was simpler
for JavaScriptCore to perform loop
optimization at that level, so it doesn't
even make use of LLVM's most powerful
optimizations, particularly SCEV based
optimization. There is a relatively small,
finite amount of low-level optimization that
is going to be important for JavaScript
benchmarks (most of InstCombine is not
relevant).</div>
<div class=""><br class="">
</div>
<div class="">SelectionDAG ISEL's compile time
makes it a very poor choice for a JIT. We
never put the effort into making x86
FastISEL competitive for WebKit's needs. The
focus now is on Global ISEL, but that won't
be ready for a while.</div>
<div class=""><br class="">
</div>
<div class="">Even when LLVM's compile time
problems are largely solved, and I believe
they can be, there will always be systemic
compile time and memory overhead from design
decisions that achieve generality,
flexibility, and layering. These are
software engineering tradeoffs.</div>
<div class=""><br class="">
</div>
<div class="">It is possible to design an
extremely lightweight SSA IR that works well
in a carefully controlled, fixed
optimization pipeline. You then benefit from
basic SSA optimizations, which are not hard
to write. You end up working with an IR of
arrays, where identifiers are indicies into
the array. It's a different way of writing
passes, but very efficient. It's probably
worth it for WebKit, but not LLVM.</div>
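
That "IR of arrays" point is worth illustrating. A minimal sketch of the
idea, assuming nothing about B3's actual representation: every value lives
in one flat vector, and operands are indices into that vector rather than
pointers.

  #include <cstdint>
  #include <vector>

  enum class Opcode : uint8_t { Const, Add, Mul, Return };

  struct Value {
    Opcode   op;
    uint32_t lhs = 0, rhs = 0;  // operand ids: indices into Function::values
    int64_t  imm = 0;           // payload for Const
  };

  struct Function {
    std::vector<Value> values;  // the whole SSA graph, densely packed

    uint32_t append(Value v) {
      values.push_back(v);
      return static_cast<uint32_t>(values.size() - 1);  // new value's id
    }
  };

Passes walk the vector directly, and side tables (use counts, liveness,
dominator numbers) are just parallel vectors indexed by the same ids, which
is presumably where most of the memory and locality win comes from.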
<div class=""><br class="">
</div>
<div class="">LLVM's patchpoints and stackmaps
features are critical for managed runtimes.
However, directly supporting these features
in a custom IR is simply more convenient. It
takes more time to make design changes to
LLVM IR vs. a custom IR. For example, LLVM
does not yet support TBAA on calls, which
would be very useful for optimizating around
patchpoints and runtime calls.</div>
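
Roughly what "directly supporting these features in a custom IR" can mean,
as a hypothetical sketch (the names are invented here, not B3's or LLVM's
actual API): the patchpoint is just another node that carries its live state
and a code-emission callback, with no separate intrinsic or attribute
machinery needed.

  #include <cstdint>
  #include <vector>

  struct PatchpointNode {
    std::vector<uint32_t> liveValues;  // value ids that must be recoverable
                                       // here; these become stackmap entries
    uint32_t reservedBytes = 0;        // space left for the runtime to patch

    // Called during code emission with the locations the register allocator
    // assigned to liveValues; the runtime records them in the stackmap.
    void (*emit)(const std::vector<int>& assignedLocations) = nullptr;
  };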
<div class=""><br class="">
</div>
<div class="">Prior to FTL, JavaScriptCore had
no dependence on the LLVM project.
Maintaining a dependence on an external
project naturally has integration overhead.</div>
<div class=""><br class="">
</div>
<div class="">So, while LLVM is not the
perfect JIT IR, it is very useful for JIT
developers who want a quick solution for
low-level optimization and retargetable
codegen. WebKit FTL was a great example of
using it to bootstrap a higher tier JIT.</div>
<div class=""><br class="">
</div>
<div class="">To that end, I think it is
important for LLVM to have a well-supported
-Ojit pipeline (compile fast) with the right
set of passes for higher-level languages
(e.g. Tail Duplication).</div>
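
For what it's worth, here is a sketch of what such an "-Ojit" function
pipeline might contain, using the legacy pass manager and only cheap cleanup
passes. The pass selection is my guess at the intent, and exact header
locations vary across LLVM versions.

  #include "llvm/IR/Function.h"
  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/Scalar.h"

  using namespace llvm;

  // Run a fast, JIT-oriented cleanup pipeline over one function.
  void runOjitPipeline(Module &M, Function &F) {
    legacy::FunctionPassManager FPM(&M);
    FPM.add(createEarlyCSEPass());           // cheap redundancy elimination
    FPM.add(createReassociatePass());        // enable simple arithmetic folds
    FPM.add(createCFGSimplificationPass());  // tidy up control flow
    // A managed-language pipeline would also want something like tail
    // duplication here, as suggested above.
    FPM.doInitialization();
    FPM.run(F);
    FPM.doFinalization();
  }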
<div class=""><br class="">
</div>
<div class="">-Andy</div>
<br class="">
<fieldset class="mimeAttachmentHeader"></fieldset>
<br class="">
<pre class="" wrap="">_______________________________________________
LLVM Developers mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a>
</pre>
</blockquote>
<br class="">
</div>
</div>
</blockquote>
</div>
<br class="">
</blockquote>
<br class="">
</div>
</div>
</blockquote>
</div>
<br class="">
</blockquote>
<br>
</body>
</html>