[LLVMdev] [NVPTX] We need an LLVM CUDA math library, after all

Erik Schnetter schnetter at cct.lsu.edu
Sat Feb 9 13:20:18 PST 2013


The lack of an open-source vector math library (which is what you suggest
here) prompted me to start a project "vecmathlib", available at <
https://bitbucket.org/eschnett/vecmathlib>. This library provides almost
all of the math functions available in libm, implemented in a vectorised
manner, i.e. in a form suitable for SSE2, AVX, MIC, PTX, etc.

In its current state the library has rough edges: e.g. the precision of
many math functions is not yet ideal, and exceptional cases (NaN, Inf) are
probably not all handled correctly yet. I would be happy if vecmathlib
could be used in LLVM.

For example, assuming that there is a data type "double4" containing a
vector of 4 double precision values, vecmathlib provides a function double4
pow(double4, double4) that implements pow(). In the general case, i.e. when
no system-specific machine instructions are available, pow(x,y) is computed
via the identity pow(x,y) = exp(y*log(x)), with exp and log themselves
evaluated by Taylor expansions.
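
To make the fallback concrete, here is a simplified, self-contained sketch of
the technique. This is not vecmathlib's actual code: the names vlog/vexp/vpow
are stand-ins, and the real vectorised implementations evaluate polynomial
approximations on whole SIMD registers and add the special-case handling
mentioned above.

  // Illustration of the generic fallback pow(x,y) = exp(y*log(x)).
  // "double4" here is a plain stand-in for a 4-wide SIMD vector type.
  #include <array>
  #include <cmath>

  using double4 = std::array<double, 4>;

  // Placeholders for vectorised log/exp; a real library would use its own
  // polynomial (Taylor-style) approximations instead of calling libm.
  static double4 vlog(double4 x) {
    double4 r;
    for (int i = 0; i < 4; ++i) r[i] = std::log(x[i]);
    return r;
  }

  static double4 vexp(double4 x) {
    double4 r;
    for (int i = 0; i < 4; ++i) r[i] = std::exp(x[i]);
    return r;
  }

  // Valid for x > 0; negative bases, NaN and Inf need the extra handling
  // discussed above.
  double4 vpow(double4 x, double4 y) {
    double4 t = vlog(x);
    for (int i = 0; i < 4; ++i) t[i] *= y[i];
    return vexp(t);
  }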

I would be happy to receive feedback on and/or contributions to vecmathlib.

-erik


On Thu, Feb 7, 2013 at 5:08 PM, Dmitry Mikushin <dmitry at kernelgen.org> wrote:

> Hi Justin, gentlemen,
>
> I'm afraid I have to escalate this issue at this point. Since it was first
> discussed last summer, it has been sufficient for us to disable the lowering
> of math calls into intrinsics at the DragonEgg level and to link them
> against CUDA math functions at the LLVM IR level. Now I can say: this is no
> longer sufficient, and we need the NVPTX backend to deal with GPU math.
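>
> For concreteness, the IR-level linking step itself is straightforward with
> LLVM's module linker. A minimal, hypothetical sketch, assuming the CUDA
> math functions have already been compiled into a bitcode file
> "cuda_math.bc" (the file name and function below are illustrative, and the
> exact Linker signature has varied across LLVM versions):
>
>   #include <memory>
>   #include <utility>
>   #include "llvm/IR/LLVMContext.h"
>   #include "llvm/IR/Module.h"
>   #include "llvm/IRReader/IRReader.h"
>   #include "llvm/Linker/Linker.h"
>   #include "llvm/Support/SourceMgr.h"
>
>   // Links the device math module into the user's module, in place.
>   // Returns true on success.
>   bool linkDeviceMath(llvm::Module &UserModule, llvm::LLVMContext &Ctx) {
>     llvm::SMDiagnostic Err;
>     std::unique_ptr<llvm::Module> MathModule =
>         llvm::parseIRFile("cuda_math.bc", Err, Ctx);
>     if (!MathModule)
>       return false;  // failed to load or parse the math bitcode
>     // Linker::linkModules returns true on error.
>     return !llvm::Linker::linkModules(UserModule, std::move(MathModule));
>   }
>
> The hard part is not the linking but knowing which calls to substitute,
> which is what the rest of this mail is about.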
>
> > There also is no standard libm for PTX.
>
> Yes, that's right, but there is an interesting idea: codegen the CUDA math
> headers into LLVM IR and link them with the user module at the IR level.
> This method gives a perfect degree of flexibility with respect to
> high-level languages: the user no longer needs to deal with headers and can
> have math right in the IR, regardless of the language it was lowered from.
> I can confirm this method works very well for us with C and Fortran, but in
> order to make accurate replacements of unsupported intrinsic calls, it
> needs to become aware of the NVPTX backend's capabilities, in the form of:
>
> bool NVPTXTargetMachine::isIntrinsicSupported(Function& intrinsic), and
> string NVPTXTargetMachine::whichMathCallReplacesIntrinsic(Function& intrinsic)
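>
> To make the intent concrete, here is a hypothetical sketch of such queries,
> written as free functions (in-tree they would be NVPTXTargetMachine
> methods). Nothing below exists in LLVM today; the intrinsic-to-call
> mappings are purely illustrative:
>
>   #include <string>
>   #include "llvm/IR/Function.h"
>   #include "llvm/IR/Intrinsics.h"
>
>   // Can the NVPTX backend lower this intrinsic directly to PTX?
>   bool isIntrinsicSupportedByNVPTX(const llvm::Function &Intr) {
>     switch (Intr.getIntrinsicID()) {
>     case llvm::Intrinsic::sqrt:  // PTX has sqrt.rn.f32 / sqrt.rn.f64
>     case llvm::Intrinsic::fma:   // PTX has fma.rn.*
>       return true;
>     default:
>       return false;              // e.g. llvm.powi.* has no PTX counterpart
>     }
>   }
>
>   // Which math-library call should replace an unsupported intrinsic?
>   std::string mathCallReplacingIntrinsic(const llvm::Function &Intr) {
>     switch (Intr.getIntrinsicID()) {
>     case llvm::Intrinsic::pow:
>       return Intr.getReturnType()->isFloatTy() ? "powf" : "pow";
>     default:
>       return "";                 // no known replacement
>     }
>   }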
>
> > I would prefer not to lower such things in the back-end since different
> > compilers may want to implement such functions differently based on speed
> > vs. accuracy trade-offs.
>
> Who are those different compilers? We are LLVM, the complete compiler
> stack, which should handle these things according to its own preference.
> Derived compilers may certainly think differently, and it is their own
> business to change anything they want and never contribute back. We should
> not forget there are a lot of derived projects that use LLVM directly, like
> KernelGen or the many embedded DSLs that have recently started flourishing.
> Their completeness and future rely on LLVM. For these reasons, I would
> strongly prefer that LLVM/NVPTX supply a reference GPU math implementation,
> and I invite you and everyone else to form a joint roadmap to deliver it.
>
> Before we get started: IANAL, but something tells me there could be a
> licensing issue with releasing the LLVM IR emitted from the CUDA headers.
> Could you please check this with NVIDIA?
>
> Many thanks,
> - D.
>
> 2012/9/6 Justin Holewinski <justin.holewinski at gmail.com>:
> > On 09/06/2012 10:02 AM, Dmitry N. Mikushin wrote:
> >>
> >> Dear all,
> >>
> >> During app compilation we have a crash in NVPTX backend:
> >>
> >> LLVM ERROR: Cannot select: 0x732b270: i64 = ExternalSymbol'__powisf2'
> >> [ID=18]
> >>
> >> As I understand it, LLVM tries to lower the following call
> >>
> >> %28 = call ptx_device float @llvm.powi.f32(float 2.000000e+00, i32 %8)
> >> nounwind readonly
> >>
> >> to a device intrinsic. The table llvm/IntrinsicsNVVM.td does not contain
> >> such an intrinsic; however, it should be a builtin, according to
> >> cuda/include/math_functions.h.
> >
> >
> > It actually gets lowered into an external function call.
> >
> >
> >>
> >> Is my understanding correct that we simply need to add the corresponding
> >> definition to llvm/IntrinsicsNVVM.td? How do we do that, and what are
> >> the rules?
> >
> >
> > PTX does not have an instruction (or simple series of instructions) that
> > implements pow, so this will not be handled.  I would prefer not to lower
> > such things in the back-end since different compilers may want to
> > implement such functions differently based on speed vs. accuracy
> > trade-offs.
> >
> > There also is no standard libm for PTX.  It is up to the higher-level
> > compiler to link against a run-time library that provides functions like
> > pow (see include/math_functions.h in a CUDA distribution).
> >
> >>
> >> Thanks,
> >> - D.
> >
> >
> > --
> > Thanks,
> >
> > Justin Holewinski
> >
>
>
>


-- 
Erik Schnetter <schnetter at cct.lsu.edu>
http://www.perimeterinstitute.ca/personal/eschnetter/