[llvm-dev] Performance of large llvm::ConstantDataArrays

Chris Lovett via llvm-dev llvm-dev at lists.llvm.org
Thu Sep 7 23:06:58 PDT 2017


I'm running into some pretty bad performance in llc.exe when compiling some
large neural networks into code that contains some very large
llvm::ConstantDataArrays, some as large as { size=102,760,448 }. There's a
small amount of actual code for processing the network, but the assembly is
mostly global data.
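
In case it helps to see the shape of the input, here's a reduced sketch of
how such a global gets built through the C++ API (emitWeights and the
"weights" name are made up for illustration; ConstantDataArray::get and the
GlobalVariable constructor are the standard calls). The IR-level
representation itself is compact:

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/IR/Constants.h"
    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/Module.h"
    #include <vector>

    using namespace llvm;

    // Builds one big constant global from a weight buffer -- the shape of
    // data this message is about. The IR side is cheap: ConstantDataArray
    // stores the elements as a single contiguous byte blob.
    GlobalVariable *emitWeights(Module &M, const std::vector<float> &W) {
      Constant *Init =
          ConstantDataArray::get(M.getContext(), ArrayRef<float>(W));
      return new GlobalVariable(M, Init->getType(), /*isConstant=*/true,
                                GlobalValue::InternalLinkage, Init,
                                "weights");
    }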

I'm finding that llc.exe memory spikes up to around 30 gigabytes and the job
takes 20-30 minutes compiling from bitcode. When I looked into it I found
that every single floating point number is loaded into a ConstantFP object,
where the float is parsed into exponent and mantissa and the integer parts
are stored in a heap-allocated array. These are then emitted into
MCDataFragments, where yet more heap-allocated data is created; each float
appears to be stored in a SmallVectorImpl<char>. On top of this I see a lot
of MCFillFragments added because of long double padding.
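
If I'm reading Constants.h right, the ConstantDataArray itself stays
compact; the per-element objects only appear once each element is
materialized as a Constant. A minimal sketch of the two access paths,
assuming a float element type (the helper names are mine):

    #include "llvm/ADT/StringRef.h"
    #include "llvm/IR/Constants.h"

    using namespace llvm;

    // Cheap: the whole array is one contiguous blob of raw bytes.
    StringRef rawBytes(const ConstantDataArray *CDA) {
      return CDA->getRawDataValues();
    }

    // Also cheap: reads the i-th element without creating any IR objects.
    float nthWeight(const ConstantDataArray *CDA, unsigned i) {
      return CDA->getElementAsFloat(i);
    }

    // Expensive: each call creates (or looks up) a uniqued ConstantFP in
    // the LLVMContext -- for mostly-distinct weights that is one heap
    // object per element, plus uniquing-map overhead.
    void touchEveryElement(const ConstantDataArray *CDA) {
      for (unsigned i = 0, e = CDA->getNumElements(); i != e; ++i)
        (void)CDA->getElementAsConstant(i);
    }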

All told, the code I'm compiling ends up with 276 million MCFragments, which
take a very long time in each phase of compilation (loading from bitcode,
emitting, layout, and writing). With a peak working set of 30 gigabytes,
each float is taking around 108 bytes!
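
For anyone checking that figure, it's just peak memory over fragment count:

    30e9 bytes / 276e6 fragments ~= 108 bytes per element

which is roughly 27x the 4 bytes a raw single-precision float needs in the
final image.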

Is there a more efficient way to do this? Or is there any plan in the works
to handle global data more efficiently in llc?