[LLVMdev] SIMD trigonometry/logarithms?

Hal Finkel hfinkel at anl.gov
Sun Jan 27 19:58:54 PST 2013


----- Original Message -----
> From: "Michael Gottesman" <mgottesman at apple.com>
> To: "Hal Finkel" <hfinkel at anl.gov>
> Cc: "Dmitry Mikushin" <dmitry at kernelgen.org>, "LLVM Developers Mailing List" <llvmdev at cs.uiuc.edu>
> Sent: Sunday, January 27, 2013 9:23:51 PM
> Subject: Re: [LLVMdev] SIMD trigonometry/logarithms?
> 
> First let me say that I really like the notion of being able to plug
> .bc libraries into the compiler, and I think that there are many
> potential uses (e.g. vector saturation operations and the like). But
> even so, it is important to realize the limitations of this approach.
> 
> Generally, implementations of transcendental functions require
> platform-specific optimizations to get the best performance and
> accuracy. Additionally, SIMD use cases usually do not require as much
> accuracy as a general math library routine, which implies that if you
> just perform a blind vectorization of a math library function, you
> will be giving up a lot of potential performance. If speed/accuracy
> is not an issue for you and you just want *something*, I suppose it
> could work OK.

I imagined that these bc files would contain a mixture of generic IR and target-specific intrinsics. Completely generic fallback versions also sound like a useful feature to support, but I'm not sure that bc files are the best way to do that (because we'd need to pre-select the supported vector sizes).
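For instance, a minimal sketch of one entry in such an x86 bc library (the function name @__vrsqrtf4 is invented here for illustration; only @llvm.x86.sse.rsqrt.ps is a real intrinsic) could compute a 4-wide reciprocal square root, with the initial estimate coming from the target-specific intrinsic and the Newton-Raphson refinement written in generic IR:

  ; hypothetical entry in an x86 vector-math bc library
  define <4 x float> @__vrsqrtf4(<4 x float> %x) nounwind {
  entry:
    ; target-specific part: low-precision rsqrt estimate
    %est = call <4 x float> @llvm.x86.sse.rsqrt.ps(<4 x float> %x)
    ; generic part: one Newton step, y1 = y0 * (1.5 - 0.5 * x * y0 * y0)
    %e2  = fmul <4 x float> %est, %est
    %xe2 = fmul <4 x float> %x, %e2
    %h   = fmul <4 x float> %xe2, <float 0.5, float 0.5, float 0.5, float 0.5>
    %s   = fsub <4 x float> <float 1.5, float 1.5, float 1.5, float 1.5>, %h
    %res = fmul <4 x float> %est, %s
    ret <4 x float> %res
  }

  declare <4 x float> @llvm.x86.sse.rsqrt.ps(<4 x float>) nounwind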

> 
> Something else to consider is the possibility of creating an
> interface for plugging in system SIMD libraries. Then one could use
> structural analysis or the like to recognize (let's say) an FFT or a
> matrix multiplication and just patch in the relevant routine. On OS X
> you would use Accelerate; on Linux you could use MKL or the like.

This would also be nice :)
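As a rough illustration of that idea (the pass output, wrapper name, and signature below are all invented), recognizing a naive single-precision matrix multiply and patching in a system routine might produce something like this at the IR level:

  ; hypothetical output: the recognized i/j/k loop nest computing
  ; C = A * B (square, row-major float matrices) has been replaced
  ; by a call into a thin wrapper around the system BLAS
  declare void @__system_sgemm(float*, float*, float*, i32)

  define void @matmul(float* %a, float* %b, float* %c, i32 %n) {
  entry:
    call void @__system_sgemm(float* %a, float* %b, float* %c, i32 %n)
    ret void
  }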

 -Hal

> 
> Just some thoughts.
> 
> Michael
> 
> On Jan 27, 2013, at 10:56 AM, Hal Finkel <hfinkel at anl.gov> wrote:
> 
> > ----- Original Message -----
> >> From: "Dmitry Mikushin" <dmitry at kernelgen.org>
> >> To: "Justin Holewinski" <justin.holewinski at gmail.com>
> >> Cc: "Hal Finkel" <hfinkel at anl.gov>, "LLVM Developers Mailing List"
> >> <llvmdev at cs.uiuc.edu>
> >> Sent: Sunday, January 27, 2013 10:19:42 AM
> >> Subject: Re: [LLVMdev] SIMD trigonometry/logarithms?
> >> 
> >> Hi Justin,
> >> 
> >> I think having .bc math libraries for different backends makes
> >> perfect sense! For example, in the case of the NVPTX backend we
> >> have the following problem: many math functions that are only
> >> available as CUDA C++ headers cannot easily be used in, for
> >> instance, a GPU program written in Fortran. On our end we are
> >> currently doing exactly what you proposed: generating a math.bc
> >> module and then linking it at the IR level with the target
> >> application. There is no need for SIMD, but having a .bc math
> >> library would still be very important!
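A caller-side sketch of that workflow, assuming a hypothetical math.bc that defines @__gpu_sinf for NVPTX (the name is invented), would have the front end emit only a declaration, with llvm-link-style IR linking against math.bc supplying the body for later inlining:

  ; application module as emitted by the front end
  declare float @__gpu_sinf(float)

  define float @kernel_body(float %x) {
  entry:
    ; resolved (and inlinable) once math.bc is linked in at the IR level
    %s = call float @__gpu_sinf(float %x)
    ret float %s
  }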
> > 
> > I agree. I think that, essentially, all we need is some
> > infrastructure for finding standard bc/ll include files (much like
> > clang can add its own internal include directory).
> > 
> > -Hal
> > 
> >> 
> >> - D.
> >> 
> >> 
> >> 2013/1/27 Justin Holewinski <justin.holewinski at gmail.com>
> >>
> >> I'm wondering if it makes sense to instead supply a bc math
> >> library.
> >> I would think it would be easier to maintain and debug, and should
> >> still give you all of the benefits. You could just link with it
> >> early in the optimization pipeline to ensure inlining. This may also
> >> make it easier to maintain SIMD functions for multiple backends.
> >>
> >> On Sun, Jan 27, 2013 at 8:49 AM, Hal Finkel <hfinkel at anl.gov>
> >> wrote:
> >>
> >> ----- Original Message -----
> >>> From: "Dimitri Tcaciuc" <dtcaciuc at gmail.com>
> >>> To: llvmdev at cs.uiuc.edu
> >>> Sent: Sunday, January 27, 2013 3:42:42 AM
> >>> Subject: [LLVMdev] SIMD trigonometry/logarithms?
> >>>
> >>> Hi everyone,
> >>> 
> >>> 
> >>> I was looking at loop vectorizer code and wondered if there was
> >>> any
> >>> current or planned effort to introduce SIMD implementations of
> >>> sin/cos/exp/log intrinsics (in particular for x86-64 backend)?
> >> 
> >> Ralf Karrenberg had implemented some of these as part of his
> >> whole-function vectorization project:
> >> https://github.com/karrenberg/wfv/blob/master/src/utils/nativeSSEMathFunctions.hpp
> >> https://github.com/karrenberg/wfv/blob/master/src/utils/nativeAVXMathFunctions.hpp
> >> 
> >> Opinions on pulling these into the X86 backend?
> >> 
> >> -Hal
> >> 
> >>> 
> >>> 
> >>> Cheers,
> >>>
> >>> Dimitri.
> >>
> >> --
> >> 
> >> 
> >> Thanks,
> >> 
> >> 
> >> Justin Holewinski

