[LLVMdev] whole program optimization examples?
Philip Reames
listmail at philipreames.com
Mon Oct 13 14:57:34 PDT 2014
On 10/11/2014 07:20 PM, Hayden Livingston wrote:
>
> Thanks, Philip, for the "lay of the land" picture. The situation I'm
> in, which reflects both my employment and my personal technical
> curiosity, is that we're seeing LLVM implementations show up every
> other week or month, and people are asking us, "Well, this
> mathematical software of yours is great, but my engineer here tells
> me it's not using this LLVM thing, and I think we're wasting cloud
> compute resources by using the JVM technology" -- this is how
> non-tech people are talking to me about this :-)
>
A couple of quick reactions here:
1) PR is a bad reason to make a technical choice (unless your job
depends on it)
2) If your bytecode is reasonably idiomatic, beating the JVM is a
surprisingly high bar. Unless you've spent a *lot* of time tuning your
existing implementation, that's almost certainly a better use of
technical resources.
3) LLVM could probably be made to work and even come out ahead, but it's
a large investment. (Keep in mind, I know nothing about your language
and am speaking in generalities. Details matter here.)
If you're seriously interested in using LLVM, I strongly suggest attending
the dev conference in a few weeks and chatting with folks in person.
You'll get a lot of valuable advice that's hard to duplicate over email.
> I heard about the LLVM JIT situation from a bunch of my friends, one
> of whom was part of the Unladen Swallow effort and basically said,
> "Trust me, it's not going to work -- I put two years of my life,
> every single day, into it."
>
> But honestly, I personally am not familiar with writing a GC or what
> it necessarily entails -- I want to learn, and I can pick it up, but
> I've spent most of my time writing JVM-based tooling, profilers,
> bytecode caches, and the like.
>
> With regard to ReadyNow, I think at least someone on my team was
> looking at it.
>
> In any case, I'll be following your blog closely now!
>
> On Sat, Oct 11, 2014 at 5:15 PM, Philip Reames
> <listmail at philipreames.com> wrote:
>
> On 10/10/2014 06:24 PM, Hayden Livingston wrote:
>
> Hello,
>
> I was wondering if there is an example list somewhere of whole
> program optimizations done by LLVM-based compilers?
>
> I'm only familiar with method-level optimizations, and I'm
> being told WPO can deliver significant speedups.
>
> My language is currently statically typed and JIT-based on the
> JVM, and I want to move it over to LLVM so that I also have the
> option of compiling it ahead of time.
>
> Depending on your use case (and frankly, your budget), you might
> want to consider Azul Zing's ReadyNow features:
> http://www.azulsystems.com/solutions/zing/readynow
>
> This isn't true ahead-of-time compilation, but it would be a way
> to get most of the benefits of classic ahead-of-time compilation
> while running on a standards-compliant JVM.
>
> (Keep in mind, I work for Azul. I may be slightly biased here.)
>
>
> I'm hearing bad things about LLVM's JIT capabilities --
> specifically that writing your own GC is going to be a pain.
>
> Out of curiosity, where did you hear this?
>
> We are actively working on improving the state of the world here.
> I'd suggest you take a look at the infrastructure patches
> currently up for review here: http://reviews.llvm.org/D5683
>
> These will hopefully land within a week or two. At that point,
> the "GC infrastructure" part should be functional. You'd have to
> pick a GC (LLVM does not provide one), but your frontend could
> emit barriers and statepoints (GC-parseable call sites), and
> everything should work. (Well, modulo bugs -- which I want to know
> about so we can fix them.)
>
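> To make that concrete, here's a rough sketch of the frontend side,
> assuming the C++ API of a recent LLVM tree and the
> "statepoint-example" strategy name; the function names here are
> invented purely for illustration:
>
>   // Rough sketch: mark a generated function as GC-managed so the
>   // statepoint machinery knows to make its call sites GC-parseable.
>   // "runtime_alloc" and "allocate_obj" are made-up names.
>   #include "llvm/IR/IRBuilder.h"
>   #include "llvm/IR/LLVMContext.h"
>   #include "llvm/IR/Module.h"
>   #include "llvm/Support/raw_ostream.h"
>
>   using namespace llvm;
>
>   int main() {
>     LLVMContext Ctx;
>     Module M("gc-demo", Ctx);
>
>     // GC-managed pointers live in address space 1 by the
>     // statepoint-example convention.
>     PointerType *GCPtr = PointerType::get(Type::getInt8Ty(Ctx), 1);
>
>     // An external runtime allocator the frontend would call into.
>     Function *Alloc =
>         Function::Create(FunctionType::get(GCPtr, /*isVarArg=*/false),
>                          Function::ExternalLinkage, "runtime_alloc", &M);
>
>     // The generated function itself.
>     Function *F =
>         Function::Create(FunctionType::get(GCPtr, /*isVarArg=*/false),
>                          Function::ExternalLinkage, "allocate_obj", &M);
>
>     // Naming a GC strategy is what lets the safepoint/statepoint
>     // passes find this function and rewrite its call sites later.
>     F->setGC("statepoint-example");
>
>     IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));
>     B.CreateRet(B.CreateCall(Alloc));
>
>     M.print(outs(), nullptr);
>     return 0;
>   }
>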
> There are a couple of options out there for pluggable GC
> libraries. The best known is Boehm's conservative GC, but
> there are others.
>
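> Hooking one of those in on the runtime side is mostly a drop-in
> replacement for malloc. A minimal sketch using Boehm, assuming libgc
> is installed and the program is linked with -lgc:
>
>   // Minimal Boehm GC usage sketch: allocate with GC_MALLOC, never
>   // free; unreachable objects are reclaimed by the conservative
>   // collector.
>   #include <gc.h>
>   #include <cstdio>
>
>   struct Node {
>     Node *next;
>     int value;
>   };
>
>   int main() {
>     GC_INIT();  // initialize the collector before the first allocation
>
>     Node *head = nullptr;
>     for (int i = 0; i < 1000000; ++i) {
>       Node *n = static_cast<Node *>(GC_MALLOC(sizeof(Node)));
>       n->value = i;
>       n->next = head;
>       if (i % 100 == 0)
>         head = n;  // keep a handful reachable; the rest become garbage
>     }
>
>     std::printf("GC heap size: %zu bytes\n", GC_get_heap_size());
>     return 0;
>   }
>
> Conservative collection needs no barriers or stack maps, at the cost
> of occasionally retaining dead objects; the statepoint work above is
> what you'd reach for if you want a precise, relocating collector.
>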
> Once that's in, we're planning on landing all of the late
> safepoint insertion logic we've been working on. This will enable
> full optimization of code for garbage-collected languages --
> provided you meet a few requirements on the input IR. You can
> read about it here:
> http://www.philipreames.com/Blog/tag/late-safepoint-placement/
>
> And find the (slightly out of date) code here:
> https://github.com/AzulSystems/llvm-late-safepoint-placement
>
>
> Anyways, sort of diverged there, but still looking for WPO
> examples!
>
> I'm curious to hear others' takes here as well. A few things that
> jump out at me: cross-function escape analysis, alias analysis (in
> support of things like LICM), and cross-function constant
> propagation. Not all of these work out of the box, but with work
> (sometimes on your side, sometimes an LLVM patch), interesting
> results can be had.
>
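> As a toy illustration of the cross-function constant propagation
> case (my example, nothing specific to your language): once LLVM can
> see every caller -- internal linkage, or LTO over the whole program
> -- inlining plus IPSCCP can specialize a helper on the constants
> actually passed to it:
>
>   // Toy example: with internal linkage (or under LTO) LLVM can see
>   // that every caller passes factor == 2, propagate that constant
>   // into sum(), fold the multiply, and typically vectorize the loop.
>   static int scale(int x, int factor) { return x * factor; }
>
>   static int sum(const int *v, int n, int factor) {
>     int s = 0;
>     for (int i = 0; i < n; ++i)
>       s += scale(v[i], factor);
>     return s;
>   }
>
>   int hot_loop(const int *v, int n) {
>     return sum(v, n, 2);  // the only call site; factor is effectively constant
>   }
>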
> Fair warning: while getting an LLVM-based JIT up and running at
> peak performance is a worthwhile endeavor (IMHO), it's also a fair
> amount of work. Getting something functional is relatively
> straightforward, but there's a lot of non-trivial tuning of your
> generated IR required to really exploit the power of the optimizers.
> We're talking person-years of work here. Most of this is in the
> performance tuning phase, and depending on your point of
> comparison, it may be an easier or harder problem. Essentially,
> the closer to C performance your current runtime is, the harder
> you'll have to work. Getting 1/10 of C performance with an untuned
> LLVM-based JIT is pretty easy; the closer you get to C (or JVM)
> performance, the harder it gets.
>
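> Much of that tuning is unglamorous: teaching your frontend to
> annotate the IR it emits so the optimizers don't have to be
> conservative about your runtime. A small sketch of the flavor,
> assuming the LLVM C++ API (the helper name "rt_type_of" is invented):
>
>   // Sketch: annotate a read-only, non-throwing runtime helper so
>   // CSE/LICM can hoist or eliminate calls to it instead of treating
>   // them as opaque.
>   #include "llvm/IR/Module.h"
>
>   using namespace llvm;
>
>   void annotateRuntimeHelpers(Module &M) {
>     if (Function *F = M.getFunction("rt_type_of")) {  // invented name
>       F->setOnlyReadsMemory();  // never writes memory
>       F->setDoesNotThrow();     // never unwinds
>     }
>   }
>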
> (Disclaimer: This is me speaking off the top of my head. Take
> everything I just said with a grain of salt.)
>
> Philip
>
>