[llvm-dev] [RFC][VECLIB] how should we legalize VECLIB calls?

Francesco Petrogalli via llvm-dev llvm-dev at lists.llvm.org
Tue Oct 9 14:45:17 PDT 2018


> On Oct 9, 2018, at 3:43 PM, Renato Golin <renato.golin at linaro.org> wrote:
>
> Hi Francesco,
>
> Thanks for copying me, I missed this thread.
>
> On Tue, 9 Oct 2018 at 19:45, Francesco Petrogalli
> <Francesco.Petrogalli at arm.com> wrote:
>> The functionality provided by the Vector Clone pass [3] goes as follows:
>>
>> 1. The vectorizer is informed via an attribute of the availability of a vector function associated to the scalar call in the scalar loop
>
> I assume this is OMP's pragma SIMD's job, for now. We may want to work
> that out automatically if we see vector functions being defined in
> some header, for example.
>

No, I meant an IR attribute. There is an RFC submitted by Intel that describes such an attribute: http://lists.llvm.org/pipermail/cfe-dev/2016-March/047732.html


>> 2. The name of the vector function carries the info generated from the `declare simd` directive associated to the scalar declaration/definition.
>
> Headers for C, some text file? for Fortran, OMP 5 for the rest, right?
>

A header for C.
Some other mechanism (text files?) for Fortran.

Both could use OpenMP 5 once we decide to go down that route.
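
For the C case, here is a minimal sketch of what I mean; the header name is hypothetical and the mangled names are only examples that follow the vector function ABI naming rules of Intel and Arm:

  /* math_simd.h -- illustrative header shipped with the compiler. */

  /* The scalar declaration carries the `declare simd` information. */
  #pragma omp declare simd simdlen(4) notinbranch
  float sinf(float x);

  /* From the directive the front end derives the mangled variant names
   * defined by the vector function ABIs, for example:
   *   _ZGVnN4v_sinf   AArch64 Advanced SIMD, unmasked, 4 lanes, vector arg
   *   _ZGVbN4v_sinf   x86 SSE, unmasked, 4 lanes, vector arg
   * and, per the Intel RFC above, would record them in an IR attribute
   * on the call so the vectorizer knows a vector version is available. */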

What exactly do you mean by “OpenMP 5 for the rest”?


>> 3. The vectorizer chooses which vector function to use based on the information generated by the vector-variant attribute associated with the original scalar function.
>
> And, I assume, make sure that the types are compatible (args and
> return) according to the language's own promotions and conversions.
>

Yes.

>> This mechanism is modular (clang and opt can be tested separately, as the vectorization information is stored in the IR via an attribute), therefore it is superior to the functionality in the Arm compiler for HPC, but it is equivalent in the case of function definition, which is the case we need in order to interface with external vector libraries, whether math libraries or any other kind of vector library.
>
> I assume Clang would just emit the defines / metadata for the vector
> functions so that LLVM can reason with them. If that's the case, then
> any Fortran front-end would have to do the same with whatever is the
> mechanism there.
>

Yes, the information is stored in metadata in the IR.

>> As it is, this mechanism cannot be used as a replacement for the VECLIB functionality, because external libraries like SVML or SLEEF have their own naming conventions. To this extent, the new directive `declare variant` of the upcoming OpenMP 5.0 standard is, in my opinion, the way forward. This directive allows the user to re-map the name associated with a `declare simd` declaration/definition to a new name chosen by the user/library vendor.
>
> I was going to ask about SLEEF. :)
>
> Is it possible that we write shims between VECLIB and SLEEF? So that
> we can already use them before OMP 5 is settled?
>

If I got your question correctly, you are asking whether we can start using SLEEF before we set up this mechanism with OpenMP 5.0. If that is the question, the answer is yes. We could use SLEEF by adding a VECLIB option for it in the TLI, as is done now for SVML, or we could support both Intel and Arm by using the libmvec-compatible version of the library, libsleefgnuabi.so - this is my favorite solution, as it is based on the vector function ABI standards of Intel and Arm.
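
To sketch how `declare variant` could do the remapping to SLEEF's native names once OpenMP 5.0 is available (the header name, the guard macro, and the AArch64-specific types below are purely illustrative):

  /* veclib_sleef.h -- hypothetical vendor header, selected for example
   * by a -fveclib style driver option that defines the guard macro.    */
  #if defined(VECLIB_SLEEF)

  #include <arm_neon.h>

  /* SLEEF's native 4-lane single-precision sine (1.0 ULP variant). */
  float32x4_t Sleef_sinf4_u10(float32x4_t x);

  /* Remap the `declare simd` sinf to the SLEEF entry point whenever the
   * vectorizer needs an unmasked 4-lane variant.                        */
  #pragma omp declare variant(Sleef_sinf4_u10) \
          match(construct={simd(simdlen(4), notinbranch)})
  float sinf(float x);

  #endif

With libsleefgnuabi.so no remapping of this kind is needed, since the exported symbols already follow the vector function ABI names (e.g. _ZGVnN4v_sinf on AArch64), which is why I prefer that route.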

>> With this construct it would be possible to choose the list of vector functions available in the library simply by tweaking the command line to select the correct portion of the header file shipped with the compiler [6], without the need to maintain lists in the TLI source code, completely splitting the functionality between frontend and backend, with no dependencies.
>
> What about the costs? An initial estimate would be "whatever the cost
> of the scalar / VF", but that is perhaps too naive, and we could get
> it wrong either way by more than just a bit.
>

There is no way to get the cost of the vector function, other than making the assumption that cost(vector version) = cost(scalar version), which is not the case. By the way, why do you think that the cost of the vector version is the scalar cost / VF?

Could we argue that vectorizing a math function is always beneficial?
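
To make the question concrete, here is the shape of the problem; VF = 4 and the mangled name are purely illustrative:

  #include <math.h>

  void apply_sin(float *restrict out, const float *restrict in, int n) {
    /* Scalar loop: n calls to sinf, whose cost the cost model can query. */
    for (int i = 0; i < n; i++)
      out[i] = sinf(in[i]);
  }

  /* Vectorized at VF = 4 the loop makes n/4 calls to something like
   * _ZGVnN4v_sinf.  Charging that call the same as one scalar sinf makes
   * it 4x cheaper per element, which may overstate the benefit; charging
   * it 4 * cost(sinf) makes vectorizing the call look pointless.  The
   * real number is a property of the library, and today the compiler has
   * no way to ask for it.                                                */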

> Eventually, I foresee people trying to add heuristics somewhere, and
> that could pollute the IR. It would be good to at least have an idea
> where that would live, so that we can make the best informed decision
> now.
>

Why do you say “pollute the IR”? The heuristics would not be added to the IR; they would be added to the code of the cost model. I am not sure I understand what you mean here.

> cheers,
> --renato
