<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 27, 2016 at 12:56 PM, Mehdi Amini via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
<br>
Disclaimer: I have no plan to break 3.x bitcode compatibility in any way.<br>
<br>
I’m experimenting with ideas to improve the current bitcode serialization and deserialization in general.<br>
I’m interested in improving the speed and the flexibility of the reader/writer, as well as making the code easier to deal with.<br>
<br>
My main grievances against our bitcode format are the lack of built-in random access (which prevents efficient lazy loading), and an implementation that does not separate the high-level structure of the serialization from the low-level emission (and the compression part). On the speed side, the bit-level management of records also has non-negligible overhead right now.<br>
The two key issues with the current format that prevent random access are:<br>
<br></blockquote><div><br></div><div>What are the most common IR entities that need fast random access? </div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
- Abbrevs are lazily emitted in the stream and can be interleaved with records. You have to scan the whole block to get the context for a record.<br>
- Because records have variable size, you need to parse them in sequence to find the “interesting” one.<br>
<br>
I just finished a quick prototype using Google Flatbuffers ( <a href="http://google.github.io/flatbuffers/" rel="noreferrer" target="_blank">http://google.github.io/<wbr>flatbuffers/</a> ). This is a solution that provides “zero-copy” serialization/deserialization by avoiding variable-length integers and aligning every field appropriately. The structure of the serialization is described using a DSL, and the serialization/deserialization code is generated from this description.<br>
My prototype is a naive description of the IR, but complete enough to handle all the .ll files present in the LLVM validation suite, and to round-trip an 800 MB file from the LTO link of Clang itself.<br>
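A minimal sketch of the “zero-copy” property described above (the helper names are invented for illustration; this is not FlatBuffers’ real generated API): because every field has a fixed width and known alignment, the reader can load a field directly from the serialized buffer with no bit-level decoding.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical fixed-width writer: each u32 field lands at a 4-byte slot.
inline void writeU32(std::vector<uint8_t> &buf, uint32_t v) {
    const size_t at = buf.size();
    buf.resize(at + sizeof(v));
    std::memcpy(buf.data() + at, &v, sizeof(v));
}

// Hypothetical zero-copy reader: field i is at a computable offset, so
// access is a single memcpy from the (possibly mmap'd) buffer -- no
// variable-length integer decoding, no intermediate record object.
inline uint32_t readU32(const uint8_t *buf, size_t fieldIndex) {
    uint32_t v;
    std::memcpy(&v, buf + fieldIndex * sizeof(uint32_t), sizeof(v));
    return v;
}
```

The trade-off is exactly the one measured below: fixed-width slots cost size compared to the bitcode format’s variable-length encoding.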
<br>
The result is that the extra flexibility given by random access helps beyond lazy loading: for instance, loading metadata is usually an issue because of forward references, while random access allows “recursing” on the operands when loading a metadata node. This simplifies the algorithms on both the reader and the writer.<br>
<br>
On the drawback side, Flatbuffers seems to target small use cases (not many different types of records, and not large buffers). The implementation is not fully optimized; for example, 40% of the writer time is spent on “vtable” (the equivalent of our abbrevs) deduplication in some of my test cases.<br>
Also, several obvious size optimizations are not possible with the current Flatbuffers: my prototype currently emits files that are ~2 times larger than our bitcode format. By working on the schema I can probably get this down to ~50%, but that may still be too high for us (especially considering that some of these limits are inherent to the Flatbuffers implementation itself).<br></blockquote><div><br></div><div><br></div><div>Do you have read/write time comparison data (for sequential and random access)?</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
So my current thinking is that we could still learn from other recently developed formats and bring some of their ideas into our format where they make sense. For example, we don’t need random access into the instruction array, but it would be valuable for metadata. We'd want zero-copy for a symbol table (see <a href="https://llvm.org/bugs/show_bug.cgi?id=27551" rel="noreferrer" target="_blank">https://llvm.org/bugs/show_<wbr>bug.cgi?id=27551</a> ) but not for the instruction stream.<br>
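A hypothetical sketch of such a zero-copy symbol table (names and layout invented here, not taken from the PR): all names live in one contiguous blob, with a fixed-width (offset, length) entry per symbol, so a lookup returns a view into the buffer without materializing string copies.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <string_view>
#include <utility>
#include <vector>

// Hypothetical zero-copy symbol table: fixed-width entries index into a
// single name blob, so name() is O(1) and allocation-free.
class SymbolTable {
    std::string blob;                                   // all names, concatenated
    std::vector<std::pair<uint32_t, uint32_t>> entries; // (offset, length)
public:
    // Register a name; returns its symbol id.
    uint32_t add(std::string_view name) {
        entries.push_back({static_cast<uint32_t>(blob.size()),
                           static_cast<uint32_t>(name.size())});
        blob.append(name);
        return static_cast<uint32_t>(entries.size() - 1);
    }
    // Zero-copy lookup: a view into the blob, no std::string constructed.
    std::string_view name(uint32_t id) const {
        auto [off, len] = entries[id];
        return std::string_view(blob).substr(off, len);
    }
};
```

In a real file format the blob and entry array would be mmap'd directly, which is what makes the fixed-width entry layout matter.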
<br>
I’m also interested in other people’s stakes in the bitcode format (beyond backward compatibility): what tradeoffs are you ready to make on size vs. speed/flexibility, for example?<br>
<br></blockquote><div><br></div><div>A hybrid format (using fixed length representation or indirection for parts that need fast random access, the rest uses compression) seems promising. Why can't backward compatibility be kept?</div><div><br></div><div>David</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
—<br>
Mehdi<br>
<br>
______________________________<wbr>_________________<br>
LLVM Developers mailing list<br>
<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/<wbr>mailman/listinfo/llvm-dev</a><br>
</blockquote></div><br></div></div>