[llvm-dev] MCJit Runtime Performance
Larry Gritz via llvm-dev
llvm-dev at lists.llvm.org
Fri Feb 5 00:01:22 PST 2016
We have had the same experience with Open Shading Language (OSL). We found that MCJIT was significantly slower than old JIT, and partially for that reason we are (sorry) still using LLVM 3.4 in production.
We have basically two use cases: (1) offline queued batch processing on a computation farm, in which a 50% hit in compilation time (seconds to minutes of CPU time) is not a big deal compared to the many hours for a full render (and for which even a SLIGHT improvement in the runtime of the resulting JITed code makes up the difference); and (2) interactive use in front of a human, where the JIT time is experienced as waiting around for something to happen (mostly at the beginning of the run, when they are antsy to see the first results show up on screen), and having that suddenly get 50% slower is a really big deal.
This is quite different from something like clang, where longer compilation time may annoy developers (or not; they like their coffee breaks) but would never be noticed by end users. Our users wait for the JIT every time they use the software.
I can see that MCJIT takes much longer than the old JIT, but I'm afraid I never profiled it or investigated specifically why. For unrelated reasons, my users have largely been unable to switch their toolchains to C++11 until now, so they were also stuck on LLVM 3.4, and figuring out what was up with MCJIT was not a high priority for me. But now that the switch to C++11 is afoot this year, unlocking more recent LLVM releases for me, MCJIT is on my radar again, so it's the perfect time for this topic to get revived.
> On Feb 4, 2016, at 11:37 PM, Keno Fischer via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Actually, reading over all of this again, I realize I may have made the
> wrong statement. The runtime regressions we see in julia are actually
> regressions in how long LLVM itself takes to do the compilation (but since
> it happens at run time in the JIT case, I think of it as a regression in
> our running time). We have only noticed occasional regressions in the
> performance of the generated code (which we are in the process of fixing).
> Which kind of regression are you talking about, time taken by LLVM or time
> taken by the LLVM-generated code?
lg at larrygritz.com