[LLVMdev] Bitcode parsing performance

Kevin Modzelewski kmod at dropbox.com
Thu Jan 23 16:04:49 PST 2014


I've uploaded some updated files (the stdlib has grown), along with the
individual bitcode sources here:
https://www.dropbox.com/sh/3hjr9j0gjhh7yj0/QU4MNkWwHJ
You should be able to recreate the stdlib.bc and stdlib.stripped.bc files
by running:

$LLVM/Release/bin/llvm-link build/{bool,dict,file,float,gc_runtime,int,list,objmodel,rewriter,str,tuple,types,util,math,time}.o.bc -o stdlib.bc
$LLVM/Release/bin/opt -strip-debug -O3 stdlib.bc -o stdlib.stripped.bc

(It looks like you need to pass the input files to llvm-link in exactly the
same order to get the same output.)
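
If you want to see where the size goes, llvm-bcanalyzer prints a per-block
breakdown of the bitcode stream; comparing the two files should make the
metadata blocks' share of the unstripped one obvious:

$LLVM/Release/bin/llvm-bcanalyzer stdlib.bc
$LLVM/Release/bin/llvm-bcanalyzer stdlib.stripped.bc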

I tested it at revisions 199542 and 199954, and between them there's roughly
a 6% decrease in bitcode size and maybe a 10-20% improvement in loading time.
That's a nice win, though loading the unstripped bitcode is still about 10x
slower than loading the stripped version (50ms vs 5ms).
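
For reference, here is a minimal sketch of how the embedded bitcode gets
loaded and timed. It is written against the LLVM 3.4-era ReaderWriter.h
interface (newer releases return ErrorOr and handle buffer ownership
differently), and the embedded-blob symbols are made up for the example:

// Minimal sketch: time a full ParseBitcodeFile load vs. a lazy
// getLazyBitcodeModule load of an embedded bitcode blob.
// Assumes the LLVM 3.4-era API from llvm/Bitcode/ReaderWriter.h.
#include "llvm/ADT/StringRef.h"
#include "llvm/Bitcode/ReaderWriter.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <string>

// Hypothetical symbols standing in for however the bitcode blob is
// embedded in the executable.
extern const char stdlib_bc_start[];
extern const size_t stdlib_bc_size;

static llvm::Module *loadStdlib(llvm::LLVMContext &Ctx, bool Lazy) {
  llvm::StringRef Data(stdlib_bc_start, stdlib_bc_size);
  // getMemBuffer does not copy the data, so the blob must stay alive as
  // long as the module does. (Buffer ownership rules differ between the
  // two parse calls; that detail is glossed over here.)
  llvm::MemoryBuffer *Buf =
      llvm::MemoryBuffer::getMemBuffer(Data, "stdlib.bc", false);
  std::string Err;
  llvm::Module *M = Lazy ? llvm::getLazyBitcodeModule(Buf, Ctx, &Err)
                         : llvm::ParseBitcodeFile(Buf, Ctx, &Err);
  if (!M)
    std::fprintf(stderr, "bitcode load failed: %s\n", Err.c_str());
  return M;
}

int main() {
  llvm::LLVMContext Ctx;
  typedef std::chrono::steady_clock Clock;
  Clock::time_point T0 = Clock::now();
  // Flip Lazy to false to time the eager ParseBitcodeFile path instead.
  llvm::Module *M = loadStdlib(Ctx, /*Lazy=*/true);
  Clock::time_point T1 = Clock::now();
  long long Ms =
      std::chrono::duration_cast<std::chrono::milliseconds>(T1 - T0).count();
  std::fprintf(stderr, "load returned %p in %lld ms\n", (void *)M, Ms);
  return 0;
}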

Kevin


On Thu, Jan 23, 2014 at 3:38 PM, Manman Ren <manman.ren at gmail.com> wrote:

>
>
>
> On Thu, Jan 23, 2014 at 9:05 AM, Eric Christopher <echristo at gmail.com> wrote:
>
>> Adrian may have handled this recently?
>>
> I believe so.
>
> Manman
>
>
>> On Jan 13, 2014 3:34 PM, "Manman Ren" <manman.ren at gmail.com> wrote:
>>
>>> I briefly looked at the bitcode files and some types are not uniqued;
>>> here is one example:
>>> !3903 = metadata !{i32 786454, metadata !3904, null, metadata !"int64_t", i32 198, i64 0, i64 0, i64 0, i32 0, metadata !2258} ; [ DW_TAG_typedef ] [int64_t] [line 198, size 0, align 0, offset 0] [from long int]
>>>
>>> !4019 = metadata !{i32 786454, metadata !4020, null, metadata !"int64_t", i32 198, i64 0, i64 0, i64 0, i32 0, metadata !2258} ; [ DW_TAG_typedef ] [int64_t] [line 198, size 0, align 0, offset 0] [from long int]
>>>
>>> !3904 = metadata !{metadata !"runtime/int.cpp", metadata !"/home/kmod/icbd/jit"}
>>> !4020 = metadata !{metadata !"runtime/list.cpp", metadata !"/home/kmod/icbd/jit"}
>>>
>>> The file names are different for the two typedefs.
>>>
>>> Manman
>>>
>>>
>>> On Fri, Jan 10, 2014 at 12:14 AM, Eric Christopher <echristo at gmail.com> wrote:
>>>
>>>> That was likely type information and should mostly be fixed up. It's
>>>> still not lazily loaded, but is going to be ridiculously smaller now.
>>>>
>>>> -eric
>>>>
>>>> On Fri Jan 10 2014 at 12:11:52 AM, Sean Silva <chisophugis at gmail.com>
>>>> wrote:
>>>>
>>>>> This summer I was working on LTO, and Rafael mentioned to me that debug
>>>>> info is not lazily loaded, which was the cause of the insane resource usage
>>>>> I was seeing when doing LTO with debug info. This is likely the reason
>>>>> lazy loading was so ineffective for your debug build.
>>>>>
>>>>> Rafael, am I remembering this right/can you give more information? I
>>>>> expect that this will have to get fixed before pitching LLD as a turnkey
>>>>> LTO solution (not sure where in the priority list it is).
>>>>>
>>>>> -- Sean Silva
>>>>> On Thu, Jan 9, 2014 at 5:37 PM, Kevin Modzelewski <kmod at dropbox.com> wrote:
>>>>>
>>>>> Hi all, I'm trying to reduce the startup time for my JIT, but I'm
>>>>> running into the problem that the majority of the time is spent loading the
>>>>> bitcode for my standard library, and I suspect it's due to debug info.  My
>>>>> stdlib is currently about 2kloc in a number of C++ files; I compile them
>>>>> with clang -g -emit-llvm, then link them together with llvm-link, call opt
>>>>> -O3 on it, and arrive at a 1MB bitcode file.  I then embed this as a binary
>>>>> blob into my executable, and call ParseBitcodeFile on it at startup.
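>>>>>
>>>>> For concreteness, the per-file step looks roughly like the following
>>>>> (the file and output names here are just illustrative):
>>>>>
>>>>> clang -g -emit-llvm -c runtime/int.cpp -o build/int.o.bc
>>>>> llvm-link build/*.o.bc -o stdlib.linked.bc
>>>>> opt -O3 stdlib.linked.bc -o stdlib.bc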
>>>>>
>>>>> Unfortunately, this parsing takes about 60ms right now, which is the
>>>>> main component of my ~100ms time to run on an empty source file (another
>>>>> ~20ms is loading the pre-jit'd image through an ObjectCache).  I thought
>>>>> I'd save some time by using getLazyBitcodeModule, since the IR isn't
>>>>> actually needed right away, but this only reduced the parsing time (i.e. the
>>>>> time of the actual getLazyBitcodeModule() call) to 45ms, which I thought
>>>>> was surprising.  I also tested computing the bytewise-xor of the bitcode
>>>>> file to make sure that it was fully read into memory, which took about 5ms,
>>>>> so the majority of the time does seem to be spent parsing.
>>>>>
>>>>> Then I switched back to ParseBitcodeFile, but now I added the
>>>>> "-strip-debug" flag to my opt invocation, which reduced the bitcode file
>>>>> down to about 100KB, and reduced the parsing time to 20ms.  What surprised
>>>>> me the most was that if I then switched to getLazyBitcodeModule, the
>>>>> parsing time was cut down to 3ms, which is what I was originally expecting.
>>>>>  So when lazy loading, stripping out the debug info cuts down the
>>>>> initialization time from 45ms to 3ms, which is why I suspect that
>>>>> getLazyBitcodeModule is still parsing all of the debug info.
>>>>>
>>>>>
>>>>> To work around it, I can generate separate builds, one with debug info
>>>>> and one without, but I'd like to avoid doing that. I did some simple
>>>>> profiling of what getLazyBitcodeModule was doing, and it wasn't terribly
>>>>> informative (it spends most of its time in parsing-related functions). Does
>>>>> anyone have any idea whether this is something that could be fixed, or
>>>>> should I just move on?
>>>>>
>>>>> Thanks,
>>>>> Kevin
>>>>>