    <div class="moz-cite-prefix">On 02/15/2016 04:57 PM, Andrew Trick
      wrote:<br>
    </div>
    <blockquote
      cite="mid:980E4DAB-9A80-434B-B27B-8FDDCBD84408@apple.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <br class="">
      <div>
        <blockquote type="cite" class="">
          <div class="">On Feb 15, 2016, at 4:25 PM, Philip Reames <<a
              moz-do-not-send="true"
              href="mailto:listmail@philipreames.com" class=""><a class="moz-txt-link-abbreviated" href="mailto:listmail@philipreames.com">listmail@philipreames.com</a></a>>
            wrote:</div>
          <br class="Apple-interchange-newline">
          <div class="">
            <meta content="text/html; charset=utf-8"
              http-equiv="Content-Type" class="">
            <div text="#000000" bgcolor="#FFFFFF" class=""> After
              reading <a moz-do-not-send="true"
                href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/"
                class="">https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/</a>.,

              I jotted down a couple of thoughts of my own here:
              <a moz-do-not-send="true"
href="http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/"
                class="">http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/</a><br
                class="">
            </div>
          </div>
        </blockquote>
        <div><br class="">
        </div>
        Thanks for sharing. I think it’s worth noting that what you are
        doing would be considered 5th tier for WebKit, since you already
        had a decent optimizing backend without LLVM. </div>
    </blockquote>

So, a serious but naive question: what are the other tiers for?  My
mental model is generally:

tier 0 - an interpreter or splat compiler -- (to deal with run-once code)
tier 1 - a fast (above all else) but decent compiler which gets the
obvious stuff -- (does most of the compilation, measured by methods)
tier 2 - a good, but still fast, compiler which generates good-quality
code without burning too much time -- (specifically for the hottish stuff)
tier 3 - a "compile time does not matter, get this hot method" compiler,
decidedly optional -- (compiles only the *really* hot stuff)

(Profiling is handled by tiers 0 and 1 in the above.)

It really sounds to me like FTL is positioned somewhere between tier 1
and tier 2 in the above.  Is that about right?
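
To make that mental model concrete, the promotion policy I have in mind
looks roughly like the sketch below (names and thresholds are made up
purely for illustration, not any particular engine's actual numbers):

  // Hypothetical tier-promotion policy driven by per-method hotness counters.
  enum class Tier { Interpreter, Baseline, Optimizing, Heroic };

  struct MethodProfile {
    unsigned invocations = 0;  // bumped by the tier 0/1 profiling hooks
    unsigned backedges = 0;    // loop iterations observed
    Tier tier = Tier::Interpreter;
  };

  // Called periodically from the profiling hooks; decides when a method
  // should be handed to the next compiler tier.
  Tier nextTier(const MethodProfile &p) {
    unsigned hotness = p.invocations + p.backedges;
    if (p.tier == Tier::Interpreter && hotness > 100)     return Tier::Baseline;
    if (p.tier == Tier::Baseline    && hotness > 10000)   return Tier::Optimizing;
    if (p.tier == Tier::Optimizing  && hotness > 1000000) return Tier::Heroic;
    return p.tier;
  }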

> You also have more room for background compilation threads and aren't
> benchmarking on a MacBook Air.

True! Both definitely matter.

> Andy
>
>> Philip
>>
>> On 02/15/2016 03:12 PM, Andrew Trick via llvm-dev wrote:
>>>
>>> On Feb 9, 2016, at 9:55 AM, Rafael Espíndola via llvm-dev <llvm-dev@lists.llvm.org> wrote:
                      <p dir="ltr" class=""> ><br class="">
                        > JavaScriptCore's [FTL JIT](<a
                          moz-do-not-send="true"
                          href="https://trac.webkit.org/wiki/FTLJIT"
                          class=""><a class="moz-txt-link-freetext" href="https://trac.webkit.org/wiki/FTLJIT">https://trac.webkit.org/wiki/FTLJIT</a></a>)
                        is moving away<br class="">
                        > from using LLVM as its backend, towards [B3
                        (Bare Bones<br class="">
                        > Backend)](<a moz-do-not-send="true"
                          href="https://webkit.org/docs/b3/" class="">https://webkit.org/docs/b3/</a>).

                        This includes its own [SSA<br class="">
                        > IR](<a moz-do-not-send="true"
                          href="https://webkit.org/docs/b3/intermediate-representation.html"
                          class="">https://webkit.org/docs/b3/intermediate-</a><a
                          moz-do-not-send="true"
                          href="https://webkit.org/docs/b3/intermediate-representation.html"
                          class="">representation.html</a>),<br class="">
                        > optimisations, and instruction selection
                        backend.</p>
                      <p dir="ltr" class="">In the end, what was the
                        main motivation for creating a new IR?</p>
                    </div>
                  </blockquote>
                </div>
                <div class="">I can't speak to the motivation of the
                  WebKit team. Those are outlined in <a
                    moz-do-not-send="true"
                    href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/"
                    class=""><a class="moz-txt-link-freetext" href="https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/">https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/</a></a>.</div>
                <div class="">I'll give you my personal perspective on
                  using LLVM for JITs, which may be interesting to the
                  LLVM community.</div>
                <div class=""><br class="">
                </div>
                <div class="">Most of the payoff for high level
                  languages comes from the language-specific optimizer.
                  It was simpler for JavaScriptCore to perform loop
                  optimization at that level, so it doesn't even make
                  use of LLVM's most powerful optimizations,
                  particularly SCEV based optimization. There is a
                  relatively small, finite amount of low-level
                  optimization that is going to be important for
                  JavaScript benchmarks (most of InstCombine is not
                  relevant).</div>
                <div class=""><br class="">
                </div>
                <div class="">SelectionDAG ISEL's compile time makes it
                  a very poor choice for a JIT. We never put the effort
                  into making x86 FastISEL competitive for WebKit's
                  needs. The focus now is on Global ISEL, but that won't
                  be ready for a while.</div>
                <div class=""><br class="">
                </div>
                <div class="">Even when LLVM's compile time problems are
                  largely solved, and I believe they can be, there will
                  always be systemic compile time and memory overhead
                  from design decisions that achieve generality,
                  flexibility, and layering. These are software
                  engineering tradeoffs.</div>
                <div class=""><br class="">
                </div>
                <div class="">It is possible to design an extremely
                  lightweight SSA IR that works well in a carefully
                  controlled, fixed optimization pipeline. You then
                  benefit from basic SSA optimizations, which are not
                  hard to write. You end up working with an IR of
                  arrays, where identifiers are indicies into the array.
                  It's a different way of writing passes, but very
                  efficient. It's probably worth it for WebKit, but not
                  LLVM.</div>
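
(To make the "IR of arrays" point concrete for other readers: I picture
something like the sketch below, where every value is a slot in one flat
vector and operands are indices into that vector. This is purely my own
illustration, not B3's actual representation.)

  // Minimal sketch of an array-based SSA IR: a "use" is a 32-bit index
  // into the function's value table rather than a pointer.
  #include <cstdint>
  #include <vector>

  using ValueId = uint32_t;

  enum class Opcode { Const, Add, Mul, Return };

  struct Value {
    Opcode op;
    int64_t imm = 0;           // payload for Const
    ValueId args[2] = {0, 0};  // operand indices; unused slots ignored
  };

  struct Function {
    std::vector<Value> values;  // creation order doubles as program order

    ValueId emit(Value v) {
      values.push_back(v);
      return ValueId(values.size() - 1);
    }
  };

  // Passes are plain loops over the vector, e.g. a trivial constant folder.
  void foldConstants(Function &f) {
    for (Value &v : f.values) {
      if (v.op == Opcode::Add &&
          f.values[v.args[0]].op == Opcode::Const &&
          f.values[v.args[1]].op == Opcode::Const) {
        v.imm = f.values[v.args[0]].imm + f.values[v.args[1]].imm;
        v.op = Opcode::Const;
      }
    }
  }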
                <div class=""><br class="">
                </div>
                <div class="">LLVM's patchpoints and stackmaps features
                  are critical for managed runtimes. However, directly
                  supporting these features in a custom IR is simply
                  more convenient. It takes more time to make design
                  changes to LLVM IR vs. a custom IR. For example, LLVM
                  does not yet support TBAA on calls, which would be
                  very useful for optimizating around patchpoints and
                  runtime calls.</div>
                <div class=""><br class="">
                </div>
                <div class="">Prior to FTL, JavaScriptCore had no
                  dependence on the LLVM project. Maintaining a
                  dependence on an external project naturally has
                  integration overhead.</div>
                <div class=""><br class="">
                </div>
                <div class="">So, while LLVM is not the perfect JIT IR,
                  it is very useful for JIT developers who want a quick
                  solution for low-level optimization and retargetable
                  codegen. WebKit FTL was a great example of using it to
                  bootstrap a higher tier JIT.</div>
                <div class=""><br class="">
                </div>
                <div class="">To that end, I think it is important for
                  LLVM to have a well-supported -Ojit pipeline (compile
                  fast) with the right set of passes for higher-level
                  languages (e.g. Tail Duplication).</div>
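
(There is no -Ojit preset today, of course; what I imagine is a short,
fixed pipeline along these lines, shown here with the legacy pass
manager and a pass selection that is purely illustrative:)

  #include "llvm/IR/LegacyPassManager.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Transforms/InstCombine/InstCombine.h"
  #include "llvm/Transforms/Scalar.h"

  // Hand-rolled "compile fast" pipeline: a handful of cheap cleanup
  // passes rather than the full -O2 stack.
  void runJitPipeline(llvm::Module &M) {
    llvm::legacy::FunctionPassManager FPM(&M);
    FPM.add(llvm::createEarlyCSEPass());              // cheap redundancy removal
    FPM.add(llvm::createInstructionCombiningPass());  // the subset that matters
    FPM.add(llvm::createCFGSimplificationPass());
    FPM.doInitialization();
    for (llvm::Function &F : M)
      if (!F.isDeclaration())
        FPM.run(F);
    FPM.doFinalization();
  }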
                <div class=""><br class="">
                </div>
                <div class="">-Andy</div>
                <br class="">
                <fieldset class="mimeAttachmentHeader"></fieldset>
                <br class="">
                <pre class="" wrap="">_______________________________________________
LLVM Developers mailing list
<a moz-do-not-send="true" class="moz-txt-link-abbreviated" href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a>
</pre>