[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)

Philip Reames via llvm-dev llvm-dev at lists.llvm.org
Mon Feb 15 17:34:59 PST 2016

On 02/15/2016 04:57 PM, Andrew Trick wrote:
>> On Feb 15, 2016, at 4:25 PM, Philip Reames <listmail at philipreames.com 
>> <mailto:listmail at philipreames.com>> wrote:
>> After reading 
>> https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/, I 
>> jotted down a couple of thoughts of my own here: 
>> http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/
> Thanks for sharing. I think it’s worth noting that what you are doing 
> would be considered 5th tier for WebKit, since you already had a 
> decent optimizing backend without LLVM.
So, a serious but naive question: what are the other tiers for?  My 
mental model is generally:
tier 0 - interpreter or splat compiler -- (to deal with run-once code)
tier 1 - a fast (above all else) but decent compiler which gets the 
obvious stuff -- (does most of the compilation, by method count)
tier 2 - a good, but fast, compiler which generates good quality code 
without burning too much time -- (specifically for the hottish stuff)
tier 3 - a "compile time does not matter, get this hot method" compiler, 
decidedly optional -- (compiles only *really* hot stuff)

(Profiling is handled by tier 0, and tier 1, in the above.)

It really sounds to me like FTL is positioned somewhere between tier 1 
and tier 2 in the above.  Is that about right?
> You also have more room for background compilation threads and aren’t 
> benchmarking on a MacBook Air.
True! Both definitely matter.
> Andy
>> Philip
>> On 02/15/2016 03:12 PM, Andrew Trick via llvm-dev wrote:
>>>> On Feb 9, 2016, at 9:55 AM, Rafael Espíndola via llvm-dev 
>>>> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>>>> >
>>>> > JavaScriptCore's [FTL JIT](https://trac.webkit.org/wiki/FTLJIT) 
>>>> is moving away
>>>> > from using LLVM as its backend, towards [B3 (Bare Bones
>>>> > Backend)](https://webkit.org/docs/b3/). This includes its own [SSA
>>>> > IR](https://webkit.org/docs/b3/intermediate-representation.html),
>>>> > optimisations, and instruction selection backend.
>>>> In the end, what was the main motivation for creating a new IR?
>>> I can't speak to the motivation of the WebKit team. Those are 
>>> outlined in 
>>> https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/.
>>> I'll give you my personal perspective on using LLVM for JITs, which 
>>> may be interesting to the LLVM community.
>>> Most of the payoff for high level languages comes from the 
>>> language-specific optimizer. It was simpler for JavaScriptCore to 
>>> perform loop optimization at that level, so it doesn't even make use 
>>> of LLVM's most powerful optimizations, particularly SCEV-based 
>>> optimization. There is a relatively small, finite amount of 
>>> low-level optimization that is going to be important for JavaScript 
>>> benchmarks (most of InstCombine is not relevant).
>>> SelectionDAG ISel's compile time makes it a very poor choice for a 
>>> JIT. We never put the effort into making x86 FastISel competitive 
>>> for WebKit's needs. The focus now is on GlobalISel, but that won't 
>>> be ready for a while.
>>> Even when LLVM's compile time problems are largely solved, and I 
>>> believe they can be, there will always be systemic compile time and 
>>> memory overhead from design decisions that achieve generality, 
>>> flexibility, and layering. These are software engineering tradeoffs.
>>> It is possible to design an extremely lightweight SSA IR that works 
>>> well in a carefully controlled, fixed optimization pipeline. You 
>>> then benefit from basic SSA optimizations, which are not hard to 
>>> write. You end up working with an IR of arrays, where identifiers 
>>> are indicies into the array. It's a different way of writing passes, 
>>> but very efficient. It's probably worth it for WebKit, but not LLVM.
>>> LLVM's patchpoints and stackmaps features are critical for managed 
>>> runtimes. However, directly supporting these features in a custom IR 
>>> is simply more convenient. It takes more time to make design changes 
>>> to LLVM IR vs. a custom IR. For example, LLVM does not yet support 
>>> TBAA on calls, which would be very useful for optimizing around 
>>> patchpoints and runtime calls.
>>> Prior to FTL, JavaScriptCore had no dependence on the LLVM project. 
>>> Maintaining a dependence on an external project naturally has 
>>> integration overhead.
>>> So, while LLVM is not the perfect JIT IR, it is very useful for JIT 
>>> developers who want a quick solution for low-level optimization and 
>>> retargetable codegen. WebKit FTL was a great example of using it to 
>>> bootstrap a higher tier JIT.
>>> To that end, I think it is important for LLVM to have a 
>>> well-supported -Ojit pipeline (compile fast) with the right set of 
>>> passes for higher-level languages (e.g. Tail Duplication).
>>> -Andy
>>> _______________________________________________
>>> LLVM Developers mailing list
>>> llvm-dev at lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

