<div dir="ltr">re: memcpy & builtins<div><br></div><div>> Now given that, not as a macro, is it possible to use the builtins inside the llvm-libc implementation so llvm-libc won't have to implement them again? I mean, do you see it as possible for llvm-libc to share its implementation with the compiler's somehow behind the scenes?</div><div><br></div><div>Sharing an implementation is certainly a noble goal to pursue. Unfortunately, there are a number of things to consider that make this hard for memory functions in general.</div><div>I'll try to give an overview of the challenges here. I'll start with the basics - my apologies if you already know most of this.</div><div><br></div><div>Most compilers use an internal representation that is well suited for an abstract, relatively high-level depiction of operations.</div><div>1. When compiling C, C++, Rust - you name it - the source language is first transformed by the front end into a common representation (the so-called IR).</div><div>2. This IR can be transformed by general - CPU-agnostic - passes and progressively refined (lowered) to get closer and closer to the real underlying hardware.<br></div><div>3. 
Finally, code generation occurs (SelectionDAG legalization and optimizations, register allocation, code emission).</div><div><br></div><div><br></div><div>During 1, we can convey the memcpy semantics to the IR in different ways:</div><div> - by using the __builtin_memcpy builtin you mention <a href="https://godbolt.org/z/Pq3Exd" target="_blank">https://godbolt.org/z/Pq3Exd</a></div><div> - by using language constructs that require the use of the memcpy semantics <a href="https://godbolt.org/z/bqY31h" target="_blank">https://godbolt.org/z/bqY31h</a></div><div> - by calling the standard library <a href="https://godbolt.org/z/f9dvda" target="_blank">https://godbolt.org/z/f9dvda</a></div><div><br></div><div>During 2, a number of optimization passes may recognize IR patterns and turn them into the IR memcpy intrinsic:</div><div> - loop without optimization <a href="https://godbolt.org/z/cfzTas" target="_blank">https://godbolt.org/z/cfzTas</a></div><div> - loop with optimization <a href="https://godbolt.org/z/1E55rv" target="_blank">https://godbolt.org/z/1E55rv</a></div><div><br></div><div>This behavior can be disabled with the (misnamed) "-fno-builtin-memcpy" flag <a href="https://godbolt.org/z/7GoxPT" target="_blank">https://godbolt.org/z/7GoxPT</a></div><div>In addition, this flag prevents the frontend from recognizing the libc memcpy function <a href="https://godbolt.org/z/dsrTrc" target="_blank">https://godbolt.org/z/dsrTrc</a></div><div>I know this is confusing :-/</div><div><br></div><div>Now, the good thing about having the compiler understand the memcpy semantics is that it can produce excellent code based on the context:</div><div> - If the size is constant and small, the IR optimization passes turn the memcpy intrinsic into loads and stores <a href="https://godbolt.org/z/b81z84" target="_blank">https://godbolt.org/z/b81z84</a></div><div> - But if the size is too big, the compiler may delegate to libc <a href="https://godbolt.org/z/jhs7Pj" 
target="_blank">https://godbolt.org/z/jhs7Pj</a></div><div> - Under some circumstances it can also choose to emit a loop <a href="https://godbolt.org/z/Wq3n99" target="_blank">https://godbolt.org/z/Wq3n99</a></div><div><br></div><div>To sum it up, many constructs can end up being interpreted by LLVM as having the memcpy semantics, and depending on the context the resulting code may differ widely.</div><div><br></div><div>Now, it is desirable to have a C/C++ implementation of memcpy so that it can benefit from optimization techniques like Profile-Guided Optimization. For instance, when the compiler _sees_ the code, it can reason about it, make inlining decisions, reorder branches, etc.</div><div>The complex interactions I described earlier turn this into a chicken-and-egg problem where the code may end up calling itself indefinitely <a href="https://godbolt.org/z/eg0p_E">https://godbolt.org/z/eg0p_E</a></div><div><br></div><div>This is why __builtin_memcpy_inline was designed in the first place (see the original thread about it <a href="https://lists.llvm.org/pipermail/llvm-dev/2019-April/131973.html">https://lists.llvm.org/pipermail/llvm-dev/2019-April/131973.html</a>).</div><div>Its contract is simpler, which makes it useful as a building block for creating memcpy functions in pure C/C++.</div><div><br></div><div>> Maybe through some shared directory from which both the compiler and llvm-libc can draw their implementations?</div><div><br></div><div>It may not be self-evident from what I described earlier, but the implementation of memcpy in LLVM really spans many different parts of the compiler, and I'm not sure it is possible to gather it all in a single place as regular code without adding ways to communicate intent to the compiler (i.e., more builtins).</div><div>For instance, loop creation has to take place at the IR level (the loop's phi nodes and condition), but it may be in tension with the availability of accelerators that are particular to backend 
implementations (think Enhanced REP MOVSB/STOSB on x86 processors).</div><div><br></div><div>I'm aware that this answer is probably still confusing, but I hope it helps nonetheless.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Feb 2, 2021 at 9:03 PM Ebrahim Byagowi via libc-dev <<a href="mailto:libc-dev@lists.llvm.org" target="_blank">libc-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">Thank you so much.<div><br></div><div>With your explanation I now think I understand the purpose of the builtins better. <span style="color:rgb(0,0,0)">I was also wrong about __builtin_memcpy_inline: it isn't as flexible as a real libc memcpy and needs its third argument to be a constant, "error: argument to '__builtin_memcpy_inline' must be a constant int…" (which I wish weren't the case, but it is understandable why it is). And now I see </span>__builtin_memcpy is also a proxy to the libc memcpy, which I guess is there just to make the compiler's code analysis easier.</div><div><br></div><div>Now given that, not as a macro, is it possible to use the builtins inside the llvm-libc implementation so llvm-libc won't have to implement them again? I mean, do you see it as possible for llvm-libc to share its implementation with the compiler's somehow behind the scenes? 
Maybe through some shared directory from which both the compiler and llvm-libc can draw their implementations?</div><div><br></div><div>The reason I'm asking is a hope I have that compiler builtins will some day become more capable. I understand I shouldn't be too hopeful about that, but I think the questions are worth considering regardless.</div><div><br></div><div>Thanks!</div></div></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Feb 2, 2021 at 10:38 PM Siva Chandra <<a href="mailto:sivachandra@google.com" target="_blank">sivachandra@google.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Mon, Feb 1, 2021 at 2:58 AM Ebrahim Byagowi <<a href="mailto:ebraminio@gmail.com" target="_blank">ebraminio@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">To describe it maybe in a better way: for example, this is all I need to get ceilf, floorf, etc. in a .wasm module built with -nostdlib -nostdinc<div><div><br></div><div>#define ceilf __builtin_ceilf</div><div>#define floorf __builtin_floorf</div><div>#define abs __builtin_abs</div><div>#define fabs __builtin_fabs</div></div><div><br></div><div>so I wondered if I could get more of libc this way (for the parts where it makes sense, of course), or at least learn what the relation will be between those builtin implementations and the upcoming libc.</div></div></div></blockquote><div><br></div><div>IIUC, there are two parts to your question:</div><div>1. Can we implement a libc function as a macro resolving to a builtin? Not if the standard requires the function to be a real addressable function. 
One can choose to also provide a macro, but an addressable function declaration should be available. See section 7.1.4 of the C11 standard for more information.</div><div>2. What is the difference between builtins and the libc flavors of the functions? Typically, builtins resolve to the hardware instruction implementing the operation. If a hardware implementation is not available, the compiler builtin calls into the libc itself. With respect to math functions, you will notice this with the `long double` flavors. That said, we have implemented the math functions in LLVM libc from first principles (as in, the implementations do not assume any special hardware support). However, we are just starting to add machine-specific implementations (<a href="https://reviews.llvm.org/D95850" target="_blank">https://reviews.llvm.org/D95850</a>). This should make the libc functions equivalent to the compiler builtins.</div></div>
</blockquote></div>
_______________________________________________<br>
libc-dev mailing list<br>
<a href="mailto:libc-dev@lists.llvm.org" target="_blank">libc-dev@lists.llvm.org</a><br>
<a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/libc-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/libc-dev</a><br>
</blockquote></div>