<div dir="ltr">Hi Terry,<div>CC'ed lang hames he is the best person to answer.</div><div><br></div><div>In general, ORCv2 is the new and stable JIT environment. In order to have a fast compilation time you can use the -lazy compilation option in ORCv2, this will result in fast compile time and interleave compile time with execution time. You can also use the concurrent compilation option in ORCv2 to speedup. Additionally, we did a new feature called "speculative compilation" in ORcv2 which yields good results for a set of benchmarks. If you are interested please try this out. We would like to have some benchmarks on your case :) </div><div>To try things out you can check out the examples directory in LLVM for ExecutionEngine. </div><div>I hope this helps</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 16 Jun 2020 at 21:10, Terry Guo via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi there,<div><br></div><div>I am trying to JIT a rather big wasm bytecode program to x86 native code and running into this JIT compilation time issue. In the first stage, I use MCJIT to translate wasm bytecode to a single LLVM IR Module which ends up with 927 functions. Then it took a pretty long time to apply several optimization passes to this big IR module and finally generate x86 code. What should I do to shorten the compilation time? Is it possible to compile this single big IR Module with MCJIT in parallel? Is OrcV2 JIT faster than MCJIT? Can the 'concurrent compilation' feature mentioned in Orcv2 webpage help on this? Thanks in advance for any advice.</div><div><br></div><div>This is how I organized the optimization pass:</div><div><br></div><div> LLVMAddBasicAliasAnalysisPass(comp_ctx->pass_mgr);<br> LLVMAddPromoteMemoryToRegisterPass(comp_ctx->pass_mgr);<br> LLVMAddInstructionCombiningPass(comp_ctx->pass_mgr);<br> LLVMAddJumpThreadingPass(comp_ctx->pass_mgr);<br> LLVMAddConstantPropagationPass(comp_ctx->pass_mgr);<br> LLVMAddReassociatePass(comp_ctx->pass_mgr);<br> LLVMAddGVNPass(comp_ctx->pass_mgr);<br> LLVMAddCFGSimplificationPass(comp_ctx->pass_mgr);<br></div><div><br></div><div>This is how I apply passes to my single IR module (which actually includes 927 functions)</div><div><br></div><div>if (comp_ctx->optimize) {<br> LLVMInitializeFunctionPassManager(comp_ctx->pass_mgr);<br> for (i = 0; i < comp_ctx->func_ctx_count; i++)<br> LLVMRunFunctionPassManager(comp_ctx->pass_mgr,<br> comp_ctx->func_ctxes[i]->func);<br> }<br></div><div><br></div><div>BR,</div><div>Terry</div></div>