<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On Feb 9, 2016, at 9:55 AM, Rafael Espíndola via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a>> wrote:</div><div class=""><p dir="ltr" class="">
><br class="">
> JavaScriptCore's [FTL JIT](<a href="https://trac.webkit.org/wiki/FTLJIT" class="">https://trac.webkit.org/wiki/FTLJIT</a>) is moving away<br class="">
> from using LLVM as its backend, towards [B3 (Bare Bones<br class="">
> Backend)](<a href="https://webkit.org/docs/b3/" class="">https://webkit.org/docs/b3/</a>). This includes its own [SSA<br class="">
> IR](<a href="https://webkit.org/docs/b3/intermediate-representation.html" class="">https://webkit.org/docs/b3/intermediate-</a><a href="https://webkit.org/docs/b3/intermediate-representation.html" class="">representation.html</a>),<br class="">
> optimisations, and instruction selection backend.</p><p dir="ltr" class="">In the end, what was the main motivation for creating a new IR?</p></div></blockquote></div><div class="">I can't speak for the WebKit team; their motivations are outlined in <a href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/" class="">https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/</a>.</div><div class="">I'll give you my personal perspective on using LLVM for JITs, which may be of interest to the LLVM community.</div><div class=""><br class=""></div><div class="">Most of the payoff for high-level languages comes from the language-specific optimizer. It was simpler for JavaScriptCore to perform loop optimization at that level, so it doesn't even make use of LLVM's most powerful optimizations, particularly SCEV-based optimization. There is a relatively small, finite amount of low-level optimization that matters for JavaScript benchmarks (most of InstCombine is not relevant).</div><div class=""><br class=""></div><div class="">SelectionDAG ISel's compile time makes it a very poor choice for a JIT. We never put the effort into making x86 FastISel competitive for WebKit's needs. The focus now is on GlobalISel, but that won't be ready for a while.</div><div class=""><br class=""></div><div class="">Even when LLVM's compile-time problems are largely solved, and I believe they can be, there will always be systemic compile-time and memory overhead from design decisions that achieve generality, flexibility, and layering. These are software engineering tradeoffs.</div><div class=""><br class=""></div><div class="">It is possible to design an extremely lightweight SSA IR that works well in a carefully controlled, fixed optimization pipeline. You then benefit from basic SSA optimizations, which are not hard to write. You end up working with an IR of arrays, where identifiers are indices into the array. 
It's a different way of writing passes, but very efficient. It's probably worth it for WebKit, but not for LLVM.</div><div class=""><br class=""></div><div class="">LLVM's patchpoint and stackmap features are critical for managed runtimes. However, supporting these features directly in a custom IR is simply more convenient. Design changes take longer to land in LLVM IR than in a custom IR. For example, LLVM does not yet support TBAA on calls, which would be very useful for optimizing around patchpoints and runtime calls.</div><div class=""><br class=""></div><div class="">Prior to FTL, JavaScriptCore had no dependence on the LLVM project. Maintaining a dependence on an external project naturally has integration overhead.</div><div class=""><br class=""></div><div class="">So, while LLVM is not the perfect JIT IR, it is very useful for JIT developers who want a quick solution for low-level optimization and retargetable codegen. WebKit FTL was a great example of using it to bootstrap a higher-tier JIT.</div><div class=""><br class=""></div><div class="">To that end, I think it is important for LLVM to have a well-supported -Ojit pipeline (compile fast) with the right set of passes for higher-level languages (e.g. Tail Duplication).</div><div class=""><br class=""></div><div class="">-Andy</div></body></html>