<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jun 6, 2016 at 2:24 AM, David Chisnall via llvm-dev <span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><span class="">On 5 Jun 2016, at 21:19, Rui Ueyama via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> wrote:<br>
><br>
> This is a short summary of an experiment that I did for the linker.<br>
><br>
> One of the major tasks of the linker is to copy file contents from input object files to an output file. I was wondering what's the fastest way to copy data from one file to another, so I conducted an experiment.<br>
><br>
> Currently, LLD copies file contents using memcpy (input files and the output file are mapped to memory). mmap+memcpy is not necessarily the fastest way to copy file contents.<br>
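<br>
(For reference, a minimal sketch of this mmap+memcpy style of copying - hypothetical names, error handling omitted, not LLD's actual code:)<br>
<pre>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

// Map the input file read-only and the output file read-write,
// then memcpy the bytes across through the two mappings.
void copyViaMmap(const char *In, const char *Out, size_t Size) {
  int InFD = open(In, O_RDONLY);
  int OutFD = open(Out, O_RDWR | O_CREAT | O_TRUNC, 0666);
  ftruncate(OutFD, Size);  // grow the output file to its final size
  char *Src = (char *)mmap(nullptr, Size, PROT_READ, MAP_PRIVATE, InFD, 0);
  char *Dst = (char *)mmap(nullptr, Size, PROT_READ | PROT_WRITE, MAP_SHARED, OutFD, 0);
  memcpy(Dst, Src, Size);
  munmap(Src, Size);
  munmap(Dst, Size);
  close(InFD);
  close(OutFD);
}
</pre>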
><br>
> Linux has the sendfile system call. The system call takes two file descriptors and copies contents from one to the other (it used to accept only a socket as the destination, but these days it can take any file). It is usually much faster than memcpy for copying files. For example, it is about 3x faster than the cp command for copying large files on my machine (on SSD/ext4).<br>
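<br>
(For illustration, a sendfile(2)-based copy might look roughly like this - Linux-specific, hypothetical names, error handling omitted. A loop is needed because sendfile may transfer fewer bytes than requested:)<br>
<pre>
#include <sys/sendfile.h>
#include <unistd.h>

// Copy Size bytes starting at Offset in InFD to the current file
// position of OutFD; the kernel moves the data between the two
// descriptors without a round trip through a userspace buffer.
void copyViaSendfile(int InFD, int OutFD, off_t Offset, size_t Size) {
  while (Size > 0) {
    ssize_t N = sendfile(OutFD, InFD, &Offset, Size);  // advances Offset
    if (N <= 0)
      break;  // real code would check errno and retry on EINTR
    Size -= (size_t)N;
  }
}
</pre>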
><br>
> I made a change to LLVM and LLD to use sendfile instead of memcpy to copy section contents. Here are the times to link clang with debug info:<br>
><br>
> memcpy: 12.96 seconds<br>
> sendfile: 12.82 seconds<br>
><br>
> sendfile(2) was slightly faster, but not by much. However, if you disable string merging (by passing the -O0 option to the linker), the difference becomes noticeable.<br>
><br>
> memcpy: 7.85 seconds<br>
> sendfile: 6.94 seconds<br>
><br>
> I think that is because, with -O0, the linker has to copy more content: it creates a 2x larger executable than without -O0. As the amount of data the linker needs to copy grows, sendfile becomes more effective.<br>
><br>
> By the way, gold takes 27.05 seconds to link it.<br>
><br>
> Given these results, I'm not going to submit that change, for two reasons. First, the optimization seems too system-specific, and I'm not yet sure it's always effective even on Linux. Second, the current implementations of MemoryBuffer and FileOutputBuffer are not sendfile(2)-friendly because they close file descriptors immediately after mapping the files to memory. As it stands, my patch is too hacky to submit.<br>
><br>
> That being said, the results clearly show that there's room for future optimization. I think we should revisit this when we do low-level optimization of link speed.<br>
<br>
</span>This approach is only likely to yield a speedup if you are copying more than a page, because then there is the potential for the kernel to avoid a memcpy and just alias the pages in the buffer cache (note: most systems won’t do this anyway, but at least then you’re exposing an optimisation opportunity to the kernel).</blockquote>
<div><br></div>
<div>This assumes that the from/to addresses have the same offset modulo the page size, which I'm not sure is ever really the case for input sections and their location in the output.</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"> Using the kernel’s memcpy in place of the userspace one is likely to be slightly slower: kernel memcpy implementations often don’t take advantage of vector operations, to avoid having to save and restore FPU state for each kernel thread. If you’re taking cache misses, though, this won’t make much difference (and if you’re on x86, depending on the manufacturer, you may hit a pattern that the microcode recognises and have your code replaced entirely with a microcoded memcpy).<br>
<br>
One possible improvement would be to have a custom memcpy that used non-temporal stores, as this memory is likely not to be used at all on the CPU in the near future (though on recent Intel chips the DMA unit shares LLC with the CPU, so it will pull the data back into L3 on writeback) and probably won’t be DMA’d for another 10-30 seconds. (If the DMA happens sooner, non-temporal stores can adversely affect performance: on Intel chips the DMA controller is limited to using a subset of the cache, so having the CPU pull data into the cache that is about to be DMA’d out can actually increase performance - ironically, some zero-copy optimisations actually harm performance on these systems.) This should reduce cache pressure, as the stores will all go through a single way in the (typically) 8-way associative cache. If this is also the last time that you’re going to read the data, then using non-temporal loads may also help. Note, however, that non-temporal hints are advisory and some x86 microcode implementations make quite surprising decisions.<br></blockquote>
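<div><br></div><div>(To make the non-temporal-store idea concrete, a rough sketch of such a copy loop on x86-64 with SSE2 intrinsics is shown below. It assumes a 16-byte-aligned destination and a size that is a multiple of 16; a real implementation would handle unaligned heads and tails, and the names are hypothetical.)</div>
<pre>
#include <emmintrin.h>
#include <stddef.h>

// memcpy variant using streaming (non-temporal) stores. The stores
// avoid allocating the destination in the cache, so the copy does not
// evict other cache lines; the sfence orders the streaming stores
// before any later writes.
void memcpyNontemporal(void *Dst, const void *Src, size_t Size) {
  char *D = (char *)Dst;
  const char *S = (const char *)Src;
  for (size_t I = 0; I < Size; I += 16) {
    __m128i V = _mm_loadu_si128((const __m128i *)(S + I));
    _mm_stream_si128((__m128i *)(D + I), V);  // D + I must be 16-byte aligned
  }
  _mm_sfence();
}
</pre>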
<div><br></div><div>I don't think that the performance problem of the memcpy here is Dcache-related (it is just a memcpy and so should prefetch well). I measured our memcpy to the output getting < 1GB/s of throughput (on a machine that can do >60GB/s of DRAM bandwidth; see <a href="http://reviews.llvm.org/D20645#440638">http://reviews.llvm.org/D20645#440638</a>). My guess is that the problem here is more about virtual memory cost (the kernel having to fix up page tables, zero-fill pages, etc.).</div>
<div><br></div>
<div>-- Sean Silva</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<br>
David<br>
<div class=""><div class="h5"><br>
_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a><br>
<a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</div></div></blockquote></div><br></div></div>