<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">Hi Chris,<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">When splitting the module and compiling, and setting threads to 2, it was taking roughly twice as long. </blockquote><div><br></div><div>Yikes.</div><div><br></div><div><blockquote class="gmail_quote" style="color:rgb(0,0,0);margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"> - If it helps I could possibly send some code over. Let me know if you'd like to see it.</blockquote><div style="color:rgb(0,0,0)"><br></div><div style="color:rgb(0,0,0)">Yes -- that would be great!</div></div><div style="color:rgb(0,0,0)"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">- I'm curious about (1) - runSessionLocked. Unfamiliar with that.</blockquote><div><br></div><div>The JIT symbol table operations (registering symbol definitions, lodging queries, updating symbol state) are all protected by the session lock in ExecutionSession. The intent is that these operations should be fast relative to compilation of modules, so there shouldn't be too much serialization on the session lock. If lots of tasks are waiting on access to the session lock then we need to look at the performance of the symbol table operations.</div><div><br></div><div>Given what you're seeing, I suspect this is poor performance in the symbol dependence tracking system. We should be able to see that with the logging from (1).</div><div><br></div><div>One other thing to note: If you break your module M up into modules A and B then you'll need to make sure that you issue lookups for symbols in both A and B up-front. 
If you only look for your entry symbol then you'll trigger compilation of one module, but your second thread will sit idle until the first module reaches the linker and starts issuing lookups for symbols in the second. That serialization will prevent you from seeing any concurrency benefits, even if I fix the dependence tracking performance. To make this easy I'll add a new transform to ExecutionUtils.h to issue these lookups for you.</div><div><br></div><div>Cheers,</div><div>Lang.</div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Feb 4, 2020 at 2:41 PM chris boese <<a href="mailto:chris107565@gmail.com">chris107565@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Lang,<div><div><br></div><div>(1) Are you timing this with a release build or a debug build? ORC uses asserts liberally, including in code that is run under the session lock, and this may decrease parallelism in debug builds.</div><div>- Mostly with release builds. I've only attempted debug builds when trying to take a trace with Valgrind/VTune.</div><div><br></div><div>(2) Are you using a fixed-size thread pool with an appropriate limit? Compiling too many things in parallel can negatively impact performance if it leads to memory exhaustion.</div><div>- Yes, I've tried as few as 2 threads. Doesn't seem to help.</div><div><br></div><div>(3) Are you loading each module on a different LLVMContext? Modules sharing an LLVMContext cannot be compiled concurrently, as contexts cannot be shared between threads.</div><div>- I've tried both ways, but yeah, I stuck with separate Contexts per module.</div><div><br></div><div>And some follow-up questions: What platform are you running on? Are you using LLJIT or LLLazyJIT? 
What kind of slow-down do you see relative to single-threaded compilation?</div><div>- Platform is a beefy server (shared among developers) with lots of cores, running Ubuntu. We're using LLLazyJIT, but have laziness turned off by setting CompileWholeModule. One test I was using took roughly 2-3 min to compile (single module). When splitting the module and compiling with threads set to 2, it was taking roughly twice as long.</div><div><br></div><div>Finally, some thoughts: The performance of concurrent compilation has not received any attention at all yet, as I have been busy with other feature work. I definitely want to get this working though. There are no stats or timings collected at the moment, but I can think of a few that would be useful and relatively easy to implement: (1) Track time spent under the session lock by adding timers to runSessionLocked, (2) Track time spent waiting on LLVMContexts in ThreadSafeContext, (3) Add a runAs<FunctionType> utility with timers to time execution of JIT functions.</div><div><br></div><div>What are your thoughts? Are there any other tools you would like to see added?</div></div><div>- I'm curious about (1) - runSessionLocked. I'm unfamiliar with that. Not to sound greedy, but all 3 sound very helpful :)</div><div><br></div><div>- If it helps, I could possibly send some code over. 
Let me know if you'd like to see it.</div><div><br></div><div>Thanks,</div><div>Chris</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 29, 2020 at 6:03 PM Lang Hames <<a href="mailto:lhames@gmail.com" target="_blank">lhames@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Chris,<div><br></div><div>I can think of a couple of things to check up front:</div><div><br></div><div>(1) Are you timing this with a release build or a debug build? ORC uses asserts liberally, including in code that is run under the session lock, and this may decrease parallelism in debug builds.</div><div><br></div><div>(2) Are you using a fixed-size thread pool with an appropriate limit? Compiling too many things in parallel can negatively impact performance if it leads to memory exhaustion.</div><div><br></div><div>(3) Are you loading each module on a different LLVMContext? Modules sharing an LLVMContext cannot be compiled concurrently, as contexts cannot be shared between threads.</div><div><br></div><div>And some follow-up questions: What platform are you running on? Are you using LLJIT or LLLazyJIT? What kind of slow-down do you see relative to single-threaded compilation?</div><div><br></div><div>Finally, some thoughts: The performance of concurrent compilation has not received any attention at all yet, as I have been busy with other feature work. I definitely want to get this working though. 
There are no stats or timings collected at the moment, but I can think of a few that would be useful and relatively easy to implement: (1) Track time spent under the session lock by adding timers to runSessionLocked, (2) Track time spent waiting on LLVMContexts in ThreadSafeContext, (3) Add a runAs<FunctionType> utility with timers to time execution of JIT functions.</div><div><br></div><div>What are your thoughts? Are there any other tools you would like to see added?</div><div><br></div><div>Cheers,</div><div>Lang.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 29, 2020 at 2:12 PM chris boese via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div>We are using the new LLJIT class in our compiler. We have not been successful using the parallel JIT feature. When we tried it previously on multiple modules, our compile time increased significantly. I don't know whether we are using it incorrectly, or whether we miss out on optimizations we get when running on a single merged module, but it hasn't worked for us yet. We are pretty far behind HEAD at the moment, but will try it again soon.</div><div><br></div><div>In the meantime, we are trying to find ways to gauge the compilation time of a module. We pass a single module to the LLJIT instance. Is there any information we can get during JIT construction to let us compare against other modules we run through the JIT? We're trying to find hot spots or performance issues in our modules. 
Timers or statistical data collected during execution of the JIT engine would be helpful, if they exist.</div><div><br></div><div>I imagine parallelizing the JIT will be our best bet for increasing performance, but we have not been able to use that yet.</div><div><br></div><div>Any help/ideas would be appreciated.</div><div><br></div><div>Thanks,</div><div>Chris</div></div>
_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
<a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</blockquote></div>
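Idea (1) above could look something like the following plain-C++ sketch. TimedSection and its member names are hypothetical illustrations, not ORC API; a real patch would put equivalent timers inside ExecutionSession::runSessionLocked rather than in a standalone class.

```cpp
#include <chrono>
#include <cstdint>
#include <mutex>

// Hypothetical sketch of a session-style lock that records both the time
// spent waiting to acquire the lock and the time spent holding it while
// running the locked operation. All names here are illustrative only.
class TimedSection {
public:
  // Run F under the lock, accumulating wait/held timings. Assumes F
  // returns a value (as the symbol-table operations typically would).
  template <typename Func>
  auto run(Func &&F) -> decltype(F()) {
    namespace sc = std::chrono;
    auto Requested = sc::steady_clock::now();
    std::lock_guard<std::mutex> Lock(M);
    auto Acquired = sc::steady_clock::now();
    // Time spent blocked on other threads' locked operations.
    WaitNanos +=
        sc::duration_cast<sc::nanoseconds>(Acquired - Requested).count();
    auto Result = F();
    // Time this operation itself kept the lock held.
    HeldNanos +=
        sc::duration_cast<sc::nanoseconds>(sc::steady_clock::now() - Acquired)
            .count();
    ++Calls;
    return Result;
  }

  std::uint64_t calls() const { return Calls; }
  std::uint64_t waitNanos() const { return WaitNanos; }
  std::uint64_t heldNanos() const { return HeldNanos; }

private:
  std::mutex M;
  std::uint64_t Calls = 0, WaitNanos = 0, HeldNanos = 0;
};
```

A high ratio of total wait time to total held time across threads would point at the session lock as the serialization bottleneck.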
</blockquote></div>
</blockquote></div>
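The serialization point about up-front lookups can be illustrated with a small, self-contained C++ simulation (no ORC involved; compileModule and the timings are stand-ins, not real API). Two ~100ms "compiles" overlap when both are issued eagerly, but run back-to-back when the second is only triggered after the first finishes, as happens if the linker discovers the second module's symbols late.

```cpp
#include <chrono>
#include <future>
#include <thread>

using Millis = std::chrono::milliseconds;

// Stand-in for compiling one module: just burns ~100ms of wall time.
void compileModule() { std::this_thread::sleep_for(Millis(100)); }

// Eager: both "lookups" are issued up front, so the two compile tasks
// run concurrently and total wall time is roughly one compile.
Millis timeEagerLookups() {
  auto Start = std::chrono::steady_clock::now();
  auto A = std::async(std::launch::async, compileModule);
  auto B = std::async(std::launch::async, compileModule);
  A.wait();
  B.wait();
  return std::chrono::duration_cast<Millis>(std::chrono::steady_clock::now() -
                                            Start);
}

// Lazy: the second compile is only triggered once the first completes,
// so the tasks serialize even with a second thread available.
Millis timeLazyLookup() {
  auto Start = std::chrono::steady_clock::now();
  std::async(std::launch::async, compileModule).wait(); // entry module
  std::async(std::launch::async, compileModule).wait(); // found at link time
  return std::chrono::duration_cast<Millis>(std::chrono::steady_clock::now() -
                                            Start);
}
```

The proposed ExecutionUtils.h transform would make the eager pattern the easy default by issuing lookups for symbols in all split modules up front.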