[PATCH] D106235: Reformulate OrcJIT tutorial doc to make it more clear.

Yuichi Yoshida via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Sep 2 17:29:32 PDT 2021


sonson updated this revision to Diff 370456.
sonson added a comment.

Fixed the patch by rebasing.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D106235/new/

https://reviews.llvm.org/D106235

Files:
  llvm/docs/tutorial/BuildingAJIT2.rst


Index: llvm/docs/tutorial/BuildingAJIT2.rst
===================================================================
--- llvm/docs/tutorial/BuildingAJIT2.rst
+++ llvm/docs/tutorial/BuildingAJIT2.rst
@@ -227,34 +227,33 @@
 concept: A layer is a way to wrap a part of a compiler pipeline (in this case
 the "opt" phase of an LLVM compiler) whose API is opaque to ORC with an
 interface that ORC can call as needed. The add method takes an
-module in some input program representation (in this case an LLVM IR module) and
-stores it in the target JITDylib, arranging for it to be passed back to the
-Layer's emit method when any symbol defined by that module is requested. Layers
-can compose neatly by calling the 'emit' method of a base layer to complete
-their work. For example, in this tutorial our IRTransformLayer calls through to
-our IRCompileLayer to compile the transformed IR, and our IRCompileLayer in turn
-calls our ObjectLayer to link the object file produced by our compiler.
-
-
-So far we have learned how to optimize and compile our LLVM IR, but we have not
-focused on when compilation happens. Our current REPL is eager: Each function
-definition is optimized and compiled as soon as it is referenced by any other
-code, regardless of whether it is ever called at runtime. In the next chapter we
-will introduce fully lazy compilation, in which functions are not compiled until
-they are first called at run-time. At this point the trade-offs get much more
-interesting: the lazier we are, the quicker we can start executing the first
-function, but the more often we will have to pause to compile newly encountered
-functions. If we only code-gen lazily, but optimize eagerly, we will have a
-longer startup time (as everything is optimized) but relatively short pauses as
-each function just passes through code-gen. If we both optimize and code-gen
-lazily we can start executing the first function more quickly, but we will have
-longer pauses as each function has to be both optimized and code-gen'd when it
-is first executed. Things become even more interesting if we consider
-interprocedural optimizations like inlining, which must be performed eagerly.
-These are complex trade-offs, and there is no one-size-fits all solution to
-them, but by providing composable layers we leave the decisions to the person
-implementing the JIT, and make it easy for them to experiment with different
-configurations.
+module in some input program representation (in this case an LLVM IR module)
+and stores it in the target ``JITDylib``, arranging for it to be passed back
+to the layer's ``emit`` method when any symbol defined by that module is requested.
+Each layer can complete its own work by calling the ``emit`` method of its base
+layer. For example, in this tutorial our IRTransformLayer calls through to
+our IRCompileLayer to compile the transformed IR, and our IRCompileLayer in
+turn calls our ObjectLayer to link the object file produced by our compiler.
+
+So far we have learned how to optimize and compile our LLVM IR, but we have
+not focused on when compilation happens. Our current REPL optimizes and
+compiles each function as soon as it is referenced by any other code,
+regardless of whether it is ever called at runtime. In the next chapter we
+will introduce fully lazy compilation, in which functions are not compiled
+until they are first called at run-time. At this point the trade-offs get much
+more interesting: the lazier we are, the quicker we can start executing the
+first function, but the more often we will have to pause to compile newly
+encountered functions. If we only code-gen lazily, but optimize eagerly, we
+will have a longer startup time (as everything is optimized at that time) but
+relatively short pauses as each function just passes through code-gen. If we
+both optimize and code-gen lazily we can start executing the first function
+more quickly, but we will have longer pauses as each function has to be both
+optimized and code-gen'd when it is first executed. Things become even more
+interesting if we consider interprocedural optimizations like inlining, which
+must be performed eagerly. These are complex trade-offs, and there is no
+one-size-fits-all solution to them, but by providing composable layers we leave
+the decisions to the person implementing the JIT, and make it easy for them to
+experiment with different configurations.
 
 `Next: Adding Per-function Lazy Compilation <BuildingAJIT3.html>`_
 


-------------- next part --------------
A non-text attachment was scrubbed...
Name: D106235.370456.patch
Type: text/x-patch
Size: 4478 bytes
Desc: not available
URL: <http://lists.llvm.org/pipermail/llvm-commits/attachments/20210903/5078a445/attachment.bin>

