[llvm-dev] [RFC][VECLIB] how should we legalize VECLIB calls?
Saito, Hideki via llvm-dev
llvm-dev at lists.llvm.org
Tue Oct 9 15:14:45 PDT 2018
I'm all for discussing the overall vectorized function call mechanism issue. Looking forward to having a fruitful discussion next week. While we are discussing,
we can try listing the interesting cases (like vectors longer than the target's full vector width) and see what we need to add on top of https://reviews.llvm.org/D40575,
which my colleague worked on.
In the meantime, another colleague of mine created https://reviews.llvm.org/D53035 for SVML legalization. This is probably as much as we can do without touching
VECLIB/TLI and the loop vectorizer's dependency on using one common VF in many places. It would also be good ground for the discussion.
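For anyone not following the review: "legalizing" here means splitting a vector math call that is wider than what the library provides into multiple legal-width calls. Roughly like the following hand-written sketch (assuming the 4-lane SVML entry point __svml_sinf4; this only illustrates the shape of the transformation, not what D53035 literally emits):

  #include <immintrin.h>

  /* 4-lane SVML entry point, provided by the library. */
  extern __m128 __svml_sinf4(__m128 x);

  /* If the loop was vectorized at VF=8 but only a 4-lane variant is
     legal/available, the single wide call is split into two calls. */
  static inline void sinf8_via_two_svml4_calls(const float *in, float *out) {
    __m128 lo = _mm_loadu_ps(in);
    __m128 hi = _mm_loadu_ps(in + 4);
    _mm_storeu_ps(out,     __svml_sinf4(lo));
    _mm_storeu_ps(out + 4, __svml_sinf4(hi));
  }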
>Eventually, I foresee people trying to add heuristics somewhere, and that could pollute the IR. It would be good to at least have an idea where that would live, so that we can make the best informed decision now.
It would be really nice if we could let LLVM read a table provided by the library implementations. In addition to per-target availability for each VF, IMF attributes (see https://lists.llvm.org/pipermail/llvm-dev/2016-March/097862.html) would be good candidates for table entries. Cost could be another.
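To make the idea concrete, the kind of record such a table could carry might look like this (purely a sketch; the field names are made up and do not correspond to any existing LLVM structure):

  struct VecLibEntry {
    const char *ScalarName;   /* e.g. "sinf"                       */
    const char *VectorName;   /* e.g. "__svml_sinf4"               */
    unsigned    VF;           /* vectorization factor              */
    const char *ISA;          /* e.g. "sse2", "avx2", "avx512"     */
    unsigned    MaxULPError;  /* IMF-style accuracy requirement    */
    unsigned    RelCost;      /* rough cost relative to the scalar */
  };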
Thanks,
Hideki
-----Original Message-----
From: Renato Golin [mailto:renato.golin at linaro.org]
Sent: Tuesday, October 09, 2018 1:43 PM
To: Francesco Petrogalli <Francesco.Petrogalli at arm.com>
Cc: Saito, Hideki <hideki.saito at intel.com>; Hal Finkel <hfinkel at anl.gov>; rob.lougher at gmail.com; Nema, Ashutosh <Ashutosh.Nema at amd.com>; LLVM Dev <llvm-dev at lists.llvm.org>; Masten, Matt <matt.masten at intel.com>; Davide Italiano <dccitaliano at gmail.com>; n-sibata at is.naist.jp; Tian, Xinmin <xinmin.tian at intel.com>
Subject: Re: [llvm-dev] [RFC][VECLIB] how should we legalize VECLIB calls?
Hi Francesco,
Thanks for copying me, I missed this thread.
On Tue, 9 Oct 2018 at 19:45, Francesco Petrogalli <Francesco.Petrogalli at arm.com> wrote:
> The functionality provided by the Vector Clone pass [3] goes as follows:
>
> 1. The vectorizer is informed, via an attribute, of the availability of a
> vector function associated with the scalar call in the scalar loop.
I assume this is OMP's pragma SIMD's job, for now. We may want to work that out automatically if we see vector functions being defined in some header, for example.
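E.g., a header along these lines would be enough for the front end to attach the attribute, with no per-library table in TLI (a minimal sketch using the existing OpenMP `declare simd` syntax):

  /* Promise 4- and 8-lane vector versions of sinf that will only be
     called unconditionally (no mask argument needed). */
  #pragma omp declare simd simdlen(4) notinbranch
  #pragma omp declare simd simdlen(8) notinbranch
  float sinf(float x);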
> 2. The name of the vector function carries the information generated from the `declare simd` directive associated with the scalar declaration/definition.
Headers for C, some text file (?) for Fortran, OMP 5 for the rest, right?
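For reference, under the x86 vector function ABI a `declare simd` on sinf gets encoded into names roughly like the ones below (the exact letters come from the ABI document for each target; AArch64 has its own scheme):

  #include <immintrin.h>

  /* _ZGV <isa> <mask> <vlen> <param kinds> _ <scalar name> */
  __m128 _ZGVbN4v_sinf(__m128 x);   /* SSE,  unmasked, VF=4, one vector arg */
  __m256 _ZGVdN8v_sinf(__m256 x);   /* AVX2, unmasked, VF=8, one vector arg */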
> 3. The vectorizer chooses which vector function to use based on the information generated by the vector-variant attribute associated with the original scalar function.
And, I assume, make sure that the types are compatible (args and
return) according to the language's own promotions and conversions.
> This mechanism is modular (clang and opt can be tested separately, as the vectorization information is stored in the IR via an attribute), and is therefore superior to the functionality in the Arm compiler for HPC; but it is equivalent in the case of function definitions, which is what we need in order to interface with external vector libraries, whether math libraries or any other kind of vector library.
I assume Clang would just emit the defines / metadata for the vector functions so that LLVM can reason with them. If that's the case, then any Fortran front-end would have to do the same with whatever is the mechanism there.
> As it is, this mechanism cannot be used as a replacement for the VECLIB functionality, because external libraries like SVML or SLEEF have their own naming conventions. To this end, the new `declare variant` directive of the upcoming OpenMP 5.0 standard is, in my opinion, the way forward. This directive allows the user or library vendor to re-map the name associated with a `declare simd` declaration/definition to a new name of their choosing.
I was going to ask about SLEEF. :)
Is it possible that we write shims between VECLIB and SLEEF? So that we can already use them before OMP 5 is settled?
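I'm thinking of something as thin as forwarding the SVML-style names that -fveclib=SVML already emits to the corresponding SLEEF entry points, e.g. (just a sketch, assuming SLEEF's Sleef_sinf4_u10 is the right entry point; which ULP variant to pick is a separate question):

  #include <immintrin.h>
  #include <sleef.h>

  /* Satisfy the reference the vectorizer emitted for the SVML name by
     forwarding to SLEEF's 4-lane single-precision sine (1.0 ULP). */
  __m128 __svml_sinf4(__m128 x) {
    return Sleef_sinf4_u10(x);
  }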
> With this construct it would be possible to choose the list of vector functions available in the library simply by tweaking the command line to select the correct portion of the header file shipped with the compiler [6], without the need to maintain lists in the TLI source code, and completely splitting the functionality between front end and back end, with no dependencies.
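For concreteness, such a header might contain fragments along these lines (a sketch of the not-yet-final OpenMP 5.0 `declare variant` syntax; the exact spelling of the context selectors may still change):

  #include <immintrin.h>

  __m128 __svml_sinf4(__m128 x);   /* vendor entry point */

  /* Scalar declaration with the usual `declare simd` information. */
  #pragma omp declare simd simdlen(4) notinbranch
  float sinf(float x);

  /* Remap the 4-lane variant implied above onto the vendor's name
     instead of the ABI-mangled _ZGVbN4v_sinf. */
  #pragma omp declare variant(__svml_sinf4) \
          match(construct={simd(simdlen(4), notinbranch)}, device={isa("sse2")})
  float sinf(float x);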
What about the costs? An initial estimate would be "whatever the cost of the scalar / VF", but that is perhaps too naive, and we could get it wrong either way by more than just a bit.
Eventually, I foresee people trying to add heuristics somewhere, and that could pollute the IR. It would be good to at least have an idea where that would live, so that we can make the best informed decision now.
cheers,
--renato