[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
Philip Reames via llvm-dev
llvm-dev at lists.llvm.org
Mon Feb 15 16:25:55 PST 2016
Regarding https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/, I
jotted down a couple of thoughts of my own here:
On 02/15/2016 03:12 PM, Andrew Trick via llvm-dev wrote:
>> On Feb 9, 2016, at 9:55 AM, Rafael Espíndola via llvm-dev
>> <llvm-dev at lists.llvm.org> wrote:
>> > WebKit's FTL JIT is moving away from using LLVM as its backend,
>> > towards [B3 (Bare Bones Backend)](https://webkit.org/docs/b3/). This
>> > includes its own [SSA IR](https://webkit.org/docs/b3/intermediate-),
>> > optimisations, and instruction selection backend.
>> In the end, what was the main motivation for creating a new IR?
> I can't speak to the motivation of the WebKit team. Those are outlined
> in https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/.
> I'll give you my personal perspective on using LLVM for JITs, which
> may be interesting to the LLVM community.
> Most of the payoff for high level languages comes from the
> language-specific optimizer. WebKit does not perform loop optimization
> at the LLVM IR level, so it doesn't even make use of LLVM's most
> powerful optimizations, particularly SCEV based optimization. There is
> a relatively small, finite amount of low-level optimization that
> matters for such a JIT (most of InstCombine is not relevant).
> SelectionDAG ISel's compile time makes it a very poor choice for a
> JIT. We never put the effort into making x86 FastISel competitive for
> WebKit's needs. The focus now is on GlobalISel, but that won't be
> ready for a while.
> Even when LLVM's compile time problems are largely solved, and I
> believe they can be, there will always be systemic compile time and
> memory overhead from design decisions that achieve generality,
> flexibility, and layering. These are software engineering tradeoffs.
> It is possible to design an extremely lightweight SSA IR that works
> well in a carefully controlled, fixed optimization pipeline. You then
> benefit from basic SSA optimizations, which are not hard to write. You
> end up working with an IR of arrays, where identifiers are indices
> into the array. It's a different way of writing passes, but very
> efficient. It's probably worth it for WebKit, but not LLVM.
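The array-of-values idea above can be made concrete with a small sketch. This is a hypothetical illustration, not B3's actual data structures: SSA values live in one flat vector, operands are plain indices into it, and a basic optimization (constant folding) is a single forward walk over the array.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical array-based SSA IR: each Value is a slot in a vector, and
// operands are indices into that same vector -- no pointers, no use lists.
enum class Op : uint8_t { Const, Add, Mul };

struct Value {
    Op op;
    int64_t imm = 0;   // payload when op == Const
    uint32_t lhs = 0;  // operand indices (valid for Add/Mul)
    uint32_t rhs = 0;
};

struct Proc {
    std::vector<Value> values;  // SSA values in program (dominance) order

    uint32_t addConst(int64_t k) {
        values.push_back({Op::Const, k, 0, 0});
        return uint32_t(values.size() - 1);
    }
    uint32_t add(Op op, uint32_t a, uint32_t b) {
        values.push_back({op, 0, a, b});
        return uint32_t(values.size() - 1);
    }
};

// A basic SSA optimization written against the array form: one forward pass
// that rewrites any Add/Mul whose operands are both constants into a Const.
// Because operands always precede their users, one pass folds whole chains.
void foldConstants(Proc& p) {
    for (Value& v : p.values) {
        if (v.op != Op::Add && v.op != Op::Mul) continue;
        const Value& a = p.values[v.lhs];
        const Value& b = p.values[v.rhs];
        if (a.op == Op::Const && b.op == Op::Const) {
            v.imm = (v.op == Op::Add) ? a.imm + b.imm : a.imm * b.imm;
            v.op = Op::Const;
        }
    }
}
```

The efficiency claim falls out of the representation: a pass is a linear scan over contiguous memory, and "replacing" a value is an in-place overwrite of a slot rather than RAUW over a use list.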
> LLVM's patchpoints and stackmaps features are critical for managed
> runtimes. However, directly supporting these features in a custom IR
> is simply more convenient. It takes more time to make design changes
> to LLVM IR vs. a custom IR. For example, LLVM does not yet support
> TBAA on calls, which would be very useful for optimizing around
> patchpoints and runtime calls.
> Maintaining a dependence on an external project naturally has
> integration overhead.
> So, while LLVM is not the perfect JIT IR, it is very useful for JIT
> developers who want a quick solution for low-level optimization and
> retargetable codegen. WebKit FTL was a great example of using it to
> bootstrap a higher tier JIT.
> To that end, I think it is important for LLVM to have a well-supported
> -Ojit pipeline (compile fast) with the right set of passes for
> higher-level languages (e.g. Tail Duplication).
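To make the Tail Duplication example concrete, here is a toy sketch of the transformation on a string-based CFG. This is not LLVM's actual TailDuplicator; all names are hypothetical. The idea: when several blocks end by jumping to one small shared tail, copy the tail into each predecessor so later passes can specialize the now straight-line code per path.

```cpp
#include <map>
#include <string>
#include <vector>

// Toy CFG: a block is a list of instruction strings, keyed by block name.
using Block = std::vector<std::string>;
using CFG = std::map<std::string, Block>;

// Tail duplication (sketch): any block ending in "jmp <tail>" absorbs a
// copy of the tail block in place of the jump, provided the tail is small
// enough that the code-size cost is acceptable.
void tailDuplicate(CFG& cfg, const std::string& tail, size_t maxSize) {
    const Block& t = cfg.at(tail);
    if (t.size() > maxSize) return;  // only duplicate small tails
    for (auto& [name, block] : cfg) {
        if (name == tail || block.empty()) continue;
        if (block.back() == "jmp " + tail) {
            block.pop_back();                               // drop the jump
            block.insert(block.end(), t.begin(), t.end());  // inline the tail
        }
    }
}
```

For dynamic-language JITs this matters because the duplicated copies let type information flow from each predecessor into its private copy of the tail, which a shared merge block would have destroyed.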