<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<div class="moz-cite-prefix">On 10/11/2014 07:20 PM, Hayden
Livingston wrote:<br>
</div>
<blockquote
cite="mid:CAMxMwyJ-yiVGY=FRdTVo5W1dXU-ty7bZdJXTRL9g4UDfF87VXQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<p>Thanks, Philip, for the "lay of the land" picture. The
situation I'm in, which reflects both my employment and (now)
my personal technical curiosity, is that we're seeing new
LLVM-based implementations show up every other week or month,
and people are asking us, "well, this mathematical software of
yours is great, but my engineer here tells me it's not using
this LLVM thing, and I think we're wasting cloud compute
resources by using the JVM technology" -- this is how
non-tech people are talking to me about this :-)</p>
</div>
</blockquote>
A couple of quick reactions here:<br>
1) PR is a bad reason to make a technical choice (unless your job
depends on it)<br>
2) If your bytecode is reasonably idiomatic, beating the JVM is a
surprisingly high bar. Unless you've already spent a *lot* of time
tuning your existing implementation, further tuning is almost
certainly a better use of technical resources. <br>
3) LLVM could probably be made to work and even come out ahead, but
it's a large investment. (Keep in mind, I know nothing about your
language and am speaking in generalities. Details matter here.)<br>
<br>
If you're seriously interested in using LLVM, I strongly suggest
attending the dev conference in a few weeks and chatting with folks
in person. You'll get a lot of valuable advice that's hard to
duplicate over email. <br>
<br>
<blockquote
cite="mid:CAMxMwyJ-yiVGY=FRdTVo5W1dXU-ty7bZdJXTRL9g4UDfF87VXQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>I heard about the LLVM JIT situation from a bunch of my
friends, one of whom was part of the Unladen Swallow effort and
basically said, "Trust me, it's not going to work -- I put two
years of my life into it, every single day."</div>
<div><br>
</div>
<div>But honestly, I'm personally not familiar with writing a
GC or what it necessarily entails -- I want to learn, and I can
pick it up, but I've spent most of my time writing JVM-based
tooling, profilers, bytecode cachers, etc.</div>
<div><br>
</div>
<div>With regards to ReadyNow, I think at least someone on my
team was looking at it.</div>
<div><br>
</div>
<div>In any case, I'll be following your blog closely now!</div>
<div><br>
</div>
<div class="gmail_extra">
<div class="gmail_quote">On Sat, Oct 11, 2014 at 5:15 PM,
Philip Reames <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:listmail@philipreames.com" target="_blank">listmail@philipreames.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><span>On
10/10/2014 06:24 PM, Hayden Livingston wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px
0px
0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">Hello,<br>
<br>
I was wondering if there is an example list somewhere
of whole program optimizations done by LLVM based
compilers?<br>
<br>
I'm only familiar with method-level optimizations, and
I'm being told WPO can deliver great speedups.<br>
<br>
My language is currently statically typed, JIT based, and
runs on the JVM, and I want to move it over to LLVM so
that I have the option of compiling it ahead of time as
well.<br>
</blockquote>
</span>
Depending on your use case (and frankly, your budget), you
might want to consider Azul Zing's ReadyNow features: <a
moz-do-not-send="true"
href="http://www.azulsystems.com/solutions/zing/readynow"
target="_blank">http://www.azulsystems.com/solutions/zing/readynow</a><br>
<br>
This isn't true ahead-of-time compilation, but it would be
a way to get most of the benefits of classic ahead-of-time
compilation while running on a standards-compliant JVM.<br>
<br>
(Keep in mind, I work for Azul. I may be slightly biased
here.)<span><br>
<blockquote class="gmail_quote" style="margin:0px 0px
0px
0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><br>
I'm hearing bad things about LLVM's JIT capabilities
-- specifically that writing your own GC is going to
be a pain.<br>
</blockquote>
</span>
Out of curiosity, where did you hear this?<br>
<br>
We are actively working on improving the state of the
world here. I'd suggest you take a look at the
infrastructure patches currently up for review here: <a
moz-do-not-send="true"
href="http://reviews.llvm.org/D5683" target="_blank">http://reviews.llvm.org/D5683</a><br>
<br>
These will hopefully land within a week or two. At that
point, the "gc infrastructure" part should be functional.
You'd have to pick a GC (LLVM does not provide one), but
your frontend could emit barriers and statepoints
(GC-parseable callsites) and everything should work. (Well,
modulo bugs -- which I want to know about so we can fix
them.)<br>
<br>
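To make that concrete, here's a minimal sketch of the frontend side
using the LLVM C++ API (mine, and purely illustrative -- the GC
strategy name and function names below are placeholders, not anything
from these patches). The idea is that a function opts into a GC
strategy, the frontend emits its own barriers as ordinary runtime
calls, and callsites in such functions can then be rewritten into
gc.statepoint callsites the collector can parse:<br>
<pre>
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("managed_module", Ctx);
  IRBuilder&lt;&gt; B(Ctx);

  // A trivial void() function the collector needs to be able to walk through.
  FunctionType *FTy = FunctionType::get(B.getVoidTy(), /*isVarArg=*/false);
  Function *F =
      Function::Create(FTy, Function::ExternalLinkage, "managed_fn", &M);

  // Opt the function into a GC strategy.  With the statepoint
  // infrastructure, calls inside GC functions can be turned into
  // gc.statepoint callsites so the runtime can parse the stack.
  // "my-gc-strategy" is a placeholder; your runtime registers the real one.
  F-&gt;setGC("my-gc-strategy");

  BasicBlock *Entry = BasicBlock::Create(Ctx, "entry", F);
  B.SetInsertPoint(Entry);
  // The frontend would also emit its read/write barriers here as
  // ordinary calls into the runtime.
  B.CreateRetVoid();

  M.print(outs(), nullptr);
  return 0;
}
</pre>
<br>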
There are a couple of options out there for pluggable GC
libraries. The best known is Boehm's conservative GC,
but there are others.<br>
<br>
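For example, here's a minimal sketch (assuming the libgc headers and
library are installed; link with -lgc) of what "pluggable" looks like
with the Boehm collector -- allocate through GC_MALLOC, never free, and
let the conservative collector reclaim whatever becomes unreachable:<br>
<pre>
#include &lt;gc.h&gt;  // Boehm-Demers-Weiser conservative collector

struct Node {
  Node *next;
  int value;
};

int main() {
  GC_INIT();  // recommended before the first allocation

  Node *head = 0;
  for (int i = 0; i &lt; 1000000; ++i) {
    // GC_MALLOC returns collector-managed memory; there is no free().
    Node *n = (Node *)GC_MALLOC(sizeof(Node));
    n-&gt;next = head;
    n-&gt;value = i;
    head = n;
    if (i % 1000 == 0)
      head = 0;  // the abandoned list becomes garbage and is collected
  }
  return 0;
}
</pre>
<br>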
Once that's in, we're planning on landing all of the late
safepoint insertion logic we've been working on. This
will enable full optimization of code for garbage-collected
languages, provided you meet a few requirements on the
input IR. You can read about it here:<br>
<a moz-do-not-send="true"
href="http://www.philipreames.com/Blog/tag/late-safepoint-placement/"
target="_blank">http://www.philipreames.com/Blog/tag/late-safepoint-placement/</a><br>
<br>
And find the (slightly out of date) code here:<br>
<a moz-do-not-send="true"
href="https://github.com/AzulSystems/llvm-late-safepoint-placement"
target="_blank">https://github.com/AzulSystems/llvm-late-safepoint-placement</a><span><br>
<blockquote class="gmail_quote" style="margin:0px 0px
0px
0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid"><br>
Anyways, sort of diverged there, but still looking for
WPO examples!<br>
</blockquote>
</span>
I'm curious to hear others' takes here as well. A few
things that jump out at me: cross-function escape
analysis, alias analysis (in support of things like LICM),
and cross-function constant propagation. Not all of these
work out of the box, but with some work (sometimes on your
side, sometimes an LLVM patch), interesting results can be
had.<br>
<br>
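To make the "cross-function" part concrete, a deliberately tiny
example (mine, not something measured):<br>
<pre>
#include &lt;cstddef&gt;

// Purely illustrative: why whole-program analysis matters.  Looking at
// sum_scaled alone, `scale` is an opaque call on every iteration.  With
// inlining / interprocedural constant propagation, the compiler sees that
// `factor` is always 3 here, that the call has no side effects, and that
// nothing escapes, so the loop body folds down to `total += data[i] * 3`
// and the remaining invariant work can be hoisted by LICM.
static int scale(int x, int factor) { return x * factor; }

int sum_scaled(const int *data, std::size_t n) {
  int total = 0;
  for (std::size_t i = 0; i &lt; n; ++i)
    total += scale(data[i], 3);
  return total;
}
</pre>
None of that happens for free in a JIT frontend, though -- you have to
hand LLVM enough information (linkage, attributes, aliasing facts) for
the interprocedural passes to fire.<br>
<br>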
Fair warning: while getting an LLVM-based JIT up and
running at peak performance is a worthwhile endeavor
(IMHO), it's also a fair amount of work. Getting
something functional is relatively straightforward, but
there's a lot of non-trivial tuning of your generated IR
to really exploit the power of the optimizers.
We're talking person-years of work here. Most of this is
in the performance tuning phase, and depending on your
point of comparison, it may be an easier or harder
problem. Essentially, the closer to C performance your
current runtime is, the harder you'll have to work.
Getting 1/10 of C performance with an untuned LLVM-based
JIT is pretty easy; the closer you get to C (or JVM)
performance, the harder it gets.<br>
<br>
(Disclaimer: This is me speaking off the top of my head.
Take everything I just said with a grain of salt.)<span><font
color="#888888"><br>
<br>
Philip<br>
</font></span></blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</body>
</html>