<div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Sep 7, 2021 at 9:15 AM Johannes Doerfert <<a href="mailto:johannesdoerfert@gmail.com" target="_blank">johannesdoerfert@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">+bump<br>
<br>
Jon did respond positively to the proposal. I think the table implementation<br>
vs. the "implemented_by" implementation is something we can experiment with.<br>
I'm in favor of the latter, as it is more general and can be used in other<br>
places more easily, e.g., by providing source annotations. That said, having<br>
the table version first would be a big step forward too.<br>
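For concreteness, a hypothetical sketch of what an "implemented_by" source annotation might boil down to in IR. The metadata name and layout here are purely illustrative, not an agreed-upon syntax:

```llvm
; Hypothetical: tie the generic intrinsic to a target-specific implementation
; in the IR itself, rather than in a backend table. Neither the metadata name
; nor its shape is settled; this only illustrates the idea.
declare double @llvm.sin.f64(double)

; Definition pulled in from libdevice.bc; it must survive until all
; potentially relevant intrinsics have been lowered.
declare double @__nv_sin(double)

!nvvm.implemented_by = !{!0}
!0 = !{!"llvm.sin.f64", !"__nv_sin"}
```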
<br>
I'd say that if we hear some other positive voices towards this, we go ahead<br>
with patches on Phab. After an end-to-end series is approved, we merge it<br>
together.<br>
<br>
That said, people should chime in if they (dis)like the approach to get math<br>
optimizations (and similar things) working on the GPU.<br></blockquote><div><br></div><div><div class="gmail_default" style="font-family:verdana,sans-serif">I do like this approach for CUDA and NVPTX. I think HIP/AMDGPU may benefit from it, too (+cc: yaxun.liu@).</div></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div><div class="gmail_default" style="font-family:verdana,sans-serif">This will likely also be useful for things other than math functions.</div><div class="gmail_default" style="font-family:verdana,sans-serif">E.g. it may come handy for sanitizer runtimes (+cc: eugenis@) that currently rely on LLVM *not* materializing libcalls they can't provide when they are building the runtime itself.<br></div></div><div class="gmail_default" style="font-family:verdana,sans-serif"><br></div><div class="gmail_default" style="font-family:verdana,sans-serif">--Artem</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
~ Johannes<br>
<br>
<br>
On 4/29/21 6:25 PM, Jon Chesterfield via llvm-dev wrote:<br>
>> Date: Wed, 28 Apr 2021 18:56:32 -0400<br>
>> From: William Moses via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>><br>
>> To: Artem Belevich <<a href="mailto:tra@google.com" target="_blank">tra@google.com</a>><br>
>> ...<br>
>><br>
>> Hi all,<br>
>><br>
>> Reviving this thread as Johannes and I recently had some time to take a<br>
>> look and do some additional design work. We'd love any thoughts on the<br>
>> following proposal.<br>
>><br>
> Keenly interested in this. A (subjectively) simpler version of the metadata<br>
> proposal is at the end. Some extra background info first, though, as GPU libm<br>
> is a really interesting design space. When I did the bring-up for a different<br>
> architecture ~3 years ago, IIRC I found the complete set:<br>
> - clang lowering libm (named functions) to intrinsics<br>
> - clang lowering intrinsic to libm functions<br>
> - optimisation passes that transform libm and ignore intrinsics<br>
> - optimisation passes that transform intrinsics and ignore libm<br>
> - selectiondag represents some intrinsics as nodes<br>
> - strength reduction, e.g. cos(double) -> cosf(float) under fast-math<br>
><br>
> I then wrote some more IR passes related to opencl-style vectorisation and<br>
> some combines to fill in the gaps (which have not reached upstream). So my<br>
> knowledge here is out of date but clang/llvm wasn't a totally consistent<br>
> lowering framework back then.<br>
><br>
> CUDA ships an IR library containing functions similar to libm. ROCm does<br>
> something similar, also as IR. We do an impedance-matching scheme in inline<br>
> headers, which blocks various optimisations and poses some challenges for<br>
> Fortran.<br>
><br>
> *Background:*<br>
><br>
>> ...<br>
>> While in theory we could define the lowering of these intrinsics to be a<br>
>> table which looks up the correct __nv_sqrt, this would require the<br>
>> definition of all such functions to remain or otherwise be available. As<br>
>> it's undesirable for the LLVM backend to be aware of CUDA paths, etc, this<br>
>> means that the original definitions brought in by merging libdevice.bc must<br>
>> be maintained. Currently these are deleted if they are unused (as libdevice<br>
>> has them marked as internal).<br>
>><br>
> The deleting is its own hazard in the context of fast-math: the function<br>
> can be deleted, and then later an optimisation creates a reference to it,<br>
> which doesn't link. It also prevents the backend from (safely) assuming<br>
> the functions are available, which is moderately annoying for lowering<br>
> some SDag ISD nodes.<br>
><br>
>> 2) GPU math functions aren't able to be optimized, unlike standard math<br>
>> functions.<br>
>><br>
> This one is bad.<br>
><br>
> *Design Constraints:*<br>
>> To remedy the problems described above we need a design that meets the<br>
>> following:<br>
>> * Does not require modifying libdevice.bc or other code shipped by a<br>
>> vendor-specific installation<br>
>> * Allows llvm math intrinsics to be lowered to device-specific code<br>
>> * Keeps definitions of code used to implement intrinsics until after all<br>
>> potential relevant intrinsics (including those created by LLVM passes) have<br>
>> been lowered.<br>
>><br>
> Yep, constraints sound right. Back ends can emit calls to these functions<br>
> too, but I think nvptx/amdgcn do not. Perhaps they would like to be able to<br>
> in places.<br>
><br>
> *Initial Design:*<br>
><br>
>> ... metadata / aliases ...<br>
>><br>
> The design would work, and lets us continue with the header files we have<br>
> now. It avoids some tedious programming, i.e. if we approached this as the<br>
> usual back-end lowering, where intrinsics / ISD nodes are emitted as named<br>
> function calls. That can be mostly driven by a table lookup, as the function<br>
> arity is limited, but it was quite tedious to program in ISel. Doing<br>
> basically the same thing for SDag + GIsel / ptx + gcn, with associated<br>
> tests, is also unappealing.<br>
><br>
> The set of functions near libm is small and known. We would need to mark<br>
> 'sin' as 'implemented by' slightly different functions for nvptx and<br>
> amdgcn, and some of them need thin wrapper code (e.g. modf in amdgcn takes<br>
> an argument by pointer). It would be helpful for the Fortran runtime<br>
> libraries effort if the implementation didn't use inline code in headers.<br>
><br>
> There's very close to a 1:1 mapping between the two gpu libraries, even<br>
> some extensions to libm exist in both. Therefore we could write a table,<br>
> {llvm.sin.f64, "sin", __nv_sin, __ocml_sin},<br>
> with NULL or similar for functions that aren't available.<br>
><br>
> A function-level IR pass, called late in the pipeline, crawls the call<br>
> instructions and rewrites them based on simple rules and that table. That<br>
> is, it would rewrite a call to llvm.sin.f64 to a call to __ocml_sin.<br>
> Exactly the same net effect as a header file containing metadata<br>
> annotations, except we don't need the metadata machinery and we can use a<br>
> single trivial IR pass for N architectures (by adding a column). The pass<br>
> can do the odd ugly thing, like impedance-matching a function type, easily<br>
> enough.<br>
><br>
> The other side of the problem - that functions, once introduced, have to<br>
> hang around until we are sure they aren't needed - is the same as in your<br>
> proposal. My preference would be to introduce the libdevice functions<br>
> immediately after the lowering pass above, but we can inject them early and<br>
> tag them to avoid erasure instead. We kind of need that to handle the<br>
> cos->cosf transform anyway.<br>
><br>
> Quite similar to the 'in theory ... table' suggestion, which I like because<br>
> I remember it being far simpler than the sdag rewrite rules.<br>
><br>
> Thanks!<br>
><br>
> Jon<br>
><br>
><br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
> <a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr">--Artem Belevich</div></div></div>