[PATCH] D53927: [AArch64] Enable libm vectorized functions via SLEEF

Stefan Teleman via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Dec 11 07:40:29 PST 2018


steleman added a comment.

Hi Renato,

My answers inline.

In D53927#1326846 <https://reviews.llvm.org/D53927#1326846>, @rengolin wrote:

> Overall, this change looks ok to me. The tests look good, too.
>
> However, I have a question that I have asked before and don't seem to see a solution here: how do you deal with evolving library support?


I'm not. I'm punting, simply because I can't control SLEEF's potential ABI changes from LLVM. :-) And absent a formal definition of SLEEF's mangling and ABI, I don't see how this problem can be dealt with in a definitive way.

I am taking it on faith that SLEEF won't keep changing their mangling and ABI.
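For context, the "mangling" I'm referring to is the set of GNU-vector-ABI-style entry points that libsleefgnuabi exports and that this patch maps libm calls onto. A minimal sketch of what I understand those bindings to look like for AArch64 AdvSIMD (the exact symbol names are SLEEF's to define, so treat these as assumed):

    /* scalar libm call            assumed 128-bit AdvSIMD SLEEF entry point */
    float  sinf(float);       /* -> _ZGVnN4v_sinf  (4 x float lanes)  */
    double sin(double);       /* -> _ZGVnN2v_sin   (2 x double lanes) */

If SLEEF ever changes that naming scheme, the mappings added in this patch would have to change with it - which is exactly the part I'm punting on.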

AFAICS, SLEEF isn't included in any Linux distro: I can't find any reference to it in RH/CentOS/Fedora, at rpmfind, or in Ubuntu or SuSE.

Which means that environments that want to use SLEEF will have to build their own copy.

The upside is that SLEEF is likely of interest only to the rarefied world of HPC/Super-computing. Those environments can definitely build their own SLEEF and maintain their own binary copy.

> Today, we don't have SVE, so we don't generate SVE bindings, all fine. Say, next year you upstream SVE bindings in SLEEF and then change LLVM to emit those too.
> 
> The objects compiled with LLVM next year won't link against SLEEF compiled today. You can say "compile SLEEF again", but Linux distros have different schedule for different packages, especially when they have different maintainers.

The objects compiled with LLVM next year will still link with today's SLEEF. They just won't bind to the SVE version of SLEEF; they'll keep binding to the non-SVE SLEEF.

Meaning, ThunderX2/T99 - for example - will never use SVE, because SVE is not available in T99 silicon.

So future SVE support becomes a source of problems only for AArch64 silicon that has SVE. For those CPUs it becomes a compile-time/link-time choice.

If they want to use an SVE-enabled SLEEF, they'll have to build an SVE-enabled SLEEF to link against. And I suspect they'll have to say '-march=armv8.2-a+sve' - or something like that - on the compile line.

Those environments can always keep two different builds of SLEEF - one with SVE, one without - simply by giving them different shared object names and different SONAMEs to bind against.

clang doesn't auto-magically pass -lsleefgnuabi on the link line, so that's left up to the consumer.
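To make the consumer's side concrete, a minimal sketch of the usage I have in mind (the install path is illustrative, and -lsleefgnuabi is simply whatever library name the local SLEEF build installs):

    # compile: vectorize libm calls through the SLEEF mappings
    clang -O2 -fveclib=SLEEF -c compute.c
    # link: the SLEEF library is named explicitly by the user
    clang compute.o -L$SLEEF_PREFIX/lib -lsleefgnuabi -o compute

An SVE-enabled setup would do the same thing against a differently-named SVE build of the library.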

> They cope with the main libraries by validating and releasing a clear set of compatible libraries, and the GNU community participates heavily in that process by testing and patching gcc, glibc, binutils, etc.
> 
> I don't think SLEEF maintainers will have the same stamina, so it's possible that an update in either distro or SLEEF breaks things, they reject the new package, but now, LLVM is broken, too.
> 
> This is not an LLVM problem, per se, but by supporting SLEEF, you'll be transferring the problem to the LLVM community, as we'll be the ones receiving the complaints first.
> 
> This also happens in LLVM proper, when we have just released a new version and people haven't updated SLEEF yet, and we receive a burst of complaints.
> 
> The real question is: is the SLEEF community ready to take on the task of fixing compatibility / linkage issues as soon as they occur? The alternative is what we do with other broken code: deletion. I want to avoid that.
> 
> The other question, inline, is about the argument name. We really want to use the same as existing one, for the sake of least surprise.

I agree with you. ;-) I will change it back to -fveclib=SLEEF. ;-)

> Some additional random comments...
> 
> Adding the triple here shows the real dependency that was never added, because SVML assumes Intel hardware, so no problems there.
> 
> The discussion on how to simplify the pattern matching is valid, but at this stage, I think we can keep the existing mechanism.
> 
> I agree that this is bound to increase with SVE but I'm not sure writing a whole new table-gen back end just for this is worthy.
> 
> The code would be slightly longer on the `src` side (instead of `build` side) but it's well organised and clear, so no big deal.
> 
> I tend to agree with @steleman that auto-generating as a loop+replace can be tricky if there are support gaps in the library.
> 
> But further cleanup changes to this patch could prove me wrong. :)

Now about the -fveclib=OpenMP + #pragma omp simd story:

I am fully aware of the OpenMP SIMD Plan[tm] and of the not-really-an-RFC RFC at https://reviews.llvm.org/D54412.

Here are my thoughts on that - I've already expressed some vague version of them earlier in this changeset:

1. If the -fveclib=OpenMP feature requires changes to existing code to work -- namely inserting #pragma omp simd directives in existing production code -- then it's a non-starter by design. There are many environments in HPC/Super-computing where making changes to existing production code requires an Act Of God and half a million $USD in code re-certification costs. (A toy illustration of the kind of edit I mean follows this list.)

2. Not every program wants to become dependent on OpenMP. Some programs **really really don't want** OpenMP. There should exist a mechanism for those kinds of programs to vectorize libm functions without involving OpenMP.

3. The story with the "magic" header file. Some environments might consider this "magic" header file a sneaky way of implementing [1] -- namely production source code changes. A header file that wasn't there before is now automagically included, and there's nothing they can do to **not** include it. Boom! - source code change. Happy re-certification planning.

4. I am very happy about the thought of OpenMP being able - at some point in the future - to vectorize things. Programs that already use OpenMP will happily take advantage of this future feature. But, based on my understanding of the current OpenMP reality, this new OpenMP feature won't be available for a while. It's certainly not available now. So, how do we accomplish libm vectorization in the meantime?
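To illustrate what I mean by a source change in [1] and [3], here is a toy contrast. It assumes, purely for illustration, that the OpenMP-based scheme ends up requiring per-loop annotations - which is exactly the point under discussion:

    /* today, with this patch: existing code, untouched, built with -fveclib=SLEEF */
    for (int i = 0; i < n; ++i)
      x[i] = sinf(x[i]);

    /* the kind of edit a pragma-based scheme would require in existing sources */
    #pragma omp simd
    for (int i = 0; i < n; ++i)
      x[i] = sinf(x[i]);

However small that edit looks, for a code base that cannot be modified without re-certification it is simply not an option.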


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D53927/new/

https://reviews.llvm.org/D53927




