<div dir="ltr">Hi Terry,<div><br></div><div>As Praveen mentioned, OrcV2 supports concurrent compilation and lazy compilation, both of which may help reduce time-to-execution. There are a number of things to keep in mind as you consider your options though:</div><div><br></div><div>(1) Neither OrcV2 nor MCJIT have any special tricks to speed up compilation of LLVM IR: The IR compilation process is opaque to them, and for the same input IR they will both take the same time*.</div><div>(2) LLVM does not currently support concurrent optimization within a module: Different modules can be compiled concurrently (provided they are attached to different LLVMContexts), but two functions in the same module can not.</div><div>(3) Laziness can help to reduce time-to-execution if some of your IR is either unlikely to be used (in which case you can avoid compiling it altogether) or unlikely to be used until later in program execution (in which case you can defer compilation).</div><div>(4) Concurrency can help reduce the wall-clock time required for compilation if you can break your modules up in a suitable way. If you're relying on whole module optimizations then there are some trade-offs to consider: breaking up a module to enable concurrent compilation may eliminate inlining opportunities. Cloning available_externally function definitions into your module to re-enable inlining opportunities can address this, but adds overhead of its own. Since it appears that you are just doing function-at-a-time optimization (without inlining) you may not have to worry about this.</div><div>(5) Some false dependencies still exist in OrcV2s concurrent compilation system -- these may artificially limit the amount of parallel work you can do at the moment. I have fixes in mind, but I also have some other features and bugs to address first, so fixes may not be available for a while.</div><div><br></div><div>Finally: I'm not sure whether you're just measuring IR optimization time or including CodeGen time too, but the best way to reduce the amount of compilation work to be done is to play around with your optimization settings and look for optimizations that you can discard without having too much impact on generated code quality. You'll want to look at both the IR optimizations and codegen optimization levels for this.</div><div><br></div><div>Regards,</div><div>Lang.</div><div><br></div><div>* Note: OrcV2 and MCJIT will take the same time to compile the same IR once they reach the compilation stage, however Orc's lazy compilation utilities will automatically break up modules before the reach the compiler, so you can't do an apples-to-apples comparison of compile times there.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jun 17, 2020 at 6:47 PM Terry Guo <<a href="mailto:flameroc@gmail.com">flameroc@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi Praveen,</div><div><br></div><div>Thanks for your help. 
On Wed, Jun 17, 2020 at 6:47 PM Terry Guo <flameroc@gmail.com> wrote:

Hi Praveen,

Thanks for your help. I will follow your suggestions and get back if I can make some progress.

BR,
Terry

On Wed, Jun 17, 2020 at 12:12 AM Praveen Velliengiri <praveenvelliengiri@gmail.com> wrote:

Hi Terry,

CC'ing Lang Hames; he is the best person to answer.

In general, ORCv2 is the new and stable JIT environment. To get fast compile times you can use lazy compilation in ORCv2, which interleaves compilation with execution. You can also use ORCv2's concurrent compilation support to speed things up. Additionally, we added a new feature called "speculative compilation" to ORCv2 which yields good results on a set of benchmarks; if you are interested, please try it out. We would like to have some benchmarks for your case :)

To try things out, check the ExecutionEngine examples in LLVM's examples directory.

I hope this helps.

On Tue, 16 Jun 2020 at 21:10, Terry Guo via llvm-dev <llvm-dev@lists.llvm.org> wrote:

Hi there,

I am trying to JIT a rather big wasm bytecode program to x86 native code and am running into a JIT compilation time issue. In the first stage, I use MCJIT to translate the wasm bytecode to a single LLVM IR module, which ends up with 927 functions. It then takes a pretty long time to apply several optimization passes to this big IR module and finally generate x86 code. What should I do to shorten the compilation time? Is it possible to compile this single big IR module with MCJIT in parallel? Is the OrcV2 JIT faster than MCJIT? Can the "concurrent compilation" feature mentioned on the OrcV2 webpage help here? Thanks in advance for any advice.

This is how I set up the optimization passes:

LLVMAddBasicAliasAnalysisPass(comp_ctx->pass_mgr);
LLVMAddPromoteMemoryToRegisterPass(comp_ctx->pass_mgr);
LLVMAddInstructionCombiningPass(comp_ctx->pass_mgr);
LLVMAddJumpThreadingPass(comp_ctx->pass_mgr);
LLVMAddConstantPropagationPass(comp_ctx->pass_mgr);
LLVMAddReassociatePass(comp_ctx->pass_mgr);
LLVMAddGVNPass(comp_ctx->pass_mgr);
LLVMAddCFGSimplificationPass(comp_ctx->pass_mgr);

And this is how I apply the passes to my single IR module (which contains all 927 functions):

if (comp_ctx->optimize) {
    LLVMInitializeFunctionPassManager(comp_ctx->pass_mgr);
    for (i = 0; i < comp_ctx->func_ctx_count; i++)
        LLVMRunFunctionPassManager(comp_ctx->pass_mgr,
                                   comp_ctx->func_ctxes[i]->func);
}

BR,
Terry
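[Editor's note: for reference, a C++ sketch of the same per-function loop, using the legacy pass manager as in the Kaleidoscope tutorials, with a trimmed pass list along the lines Lang suggests. The function name optimizeModule and the particular pass selection are illustrative only, not a recommendation; measure on your own modules.]

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Transforms/Utils.h"

// A deliberately trimmed pipeline: mem2reg, instcombine and simplifycfg.
// GVN and jump threading from the original list tend to cost more; whether
// they pay for themselves on these modules is something to measure.
void optimizeModule(llvm::Module &M) {
  llvm::legacy::FunctionPassManager FPM(&M);
  FPM.add(llvm::createPromoteMemoryToRegisterPass());
  FPM.add(llvm::createInstructionCombiningPass());
  FPM.add(llvm::createCFGSimplificationPass());

  FPM.doInitialization();
  for (llvm::Function &F : M)
    if (!F.isDeclaration())
      FPM.run(F);
  FPM.doFinalization(); // LLVM-C equivalent: LLVMFinalizeFunctionPassManager
}

[Note the doFinalization() call at the end; the matching call for the LLVM-C loop above is LLVMFinalizeFunctionPassManager(comp_ctx->pass_mgr).]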