[llvm] d6de5f1 - [SVFS] Inject TLI Mappings in VFABI attribute.

Francesco Petrogalli via llvm-commits llvm-commits at lists.llvm.org
Tue Jan 28 06:57:38 PST 2020



> On Jan 28, 2020, at 5:07 AM, Benjamin Kramer <benny.kra at gmail.com> wrote:
> 
> On Mon, Jan 27, 2020 at 3:29 PM Francesco Petrogalli
> <Francesco.Petrogalli at arm.com> wrote:
>> 
>> 
>> 
>>> On Jan 27, 2020, at 7:54 AM, Benjamin Kramer <benny.kra at gmail.com> wrote:
>>> 
>>> On Sat, Nov 16, 2019 at 5:20 PM Francesco Petrogalli via llvm-commits
>>> <llvm-commits at lists.llvm.org> wrote:
>>>> +  // Make function declaration (without a body) "sticky" in the IR by
>>>> +  // listing it in the @llvm.compiler.used intrinsic.
>>>> +  assert(!VectorF->size() && "VFABI attribute requires `@llvm.compiler.used` "
>>>> +                             "only on declarations.");
>>>> +  appendToCompilerUsed(*M, {VectorF});
>>>> +  LLVM_DEBUG(dbgs() << DEBUG_TYPE << ": Adding `" << VFName
>>>> +                    << "` to `@llvm.compiler.used`.\n");
>>>> +  ++NumCompUsedAdded;
>>> 
>>> Sorry for resurrecting an old change, but what's the reason for making
>>> functions sticky? In XLA I'm seeing a bunch of functions that get
>>> inlined and then stick around unused because of being in
>>> llvm.compiler.used without any other users.
>>> 
>>> - Ben
>> 
>> 
>> 
>> Hi Ben,
>> 
>> It is needed for function declarations that need to be carried to the vectorizer even if they don’t have any use before vectorization.
>> 
>> The use case is explained in this RFC: http://lists.llvm.org/pipermail/llvm-dev/2019-June/133484.html
>> 
>> I have highlighted the paragraph that describes why we need that.
>> 
>> ```
>> The IR attribute is used in conjunction with the vector function
>> declarations or definitions that are available in the module. Each
>> mangled name in the `vector-function-abi-attribute` is associated with
>> a corresponding declaration/definition in the module. Such a definition is
>> provided by the front-end. The vector function declaration or definition
>> is passed as an argument to the `llvm.compiler.used` intrinsic to
>> prevent the compiler from removing it from the module (for example when
>> the OpenMP mapping mechanism is used via C header file).
>> ```
>> 
>> If this is causing you problems, we might need to rethink the mechanism that ensures the availability of the vector declaration when auto-vectorizing loops.
> 
> Thanks for the explanation, but I'm still a bit confused on what's
> actually happening. Is it that LLVM removes unused functions before
> vectorization eagerly?
> 


Hi Ben,

I’ll try to give you a practical example.

First, the goal of the RFC: what we need is a way to store in the IR the mappings that link scalar functions to their vector counterparts, so that they can be used by later LLVM passes for auto-vectorization.

The basic use case for this situation is described in the RFC, where the user can decorate a scalar function in C/C++ with an attribute that describes the mappings:

```
float32x4_t vector_sinf(float32x4_t);

float sinf(float) __attribute__((clang_simd_variant(vector_sinf, "nomask", 4 /*VLEN*/, "simd")));


// ...
// scalar user code that uses `sinf`
for (...)
  x[i] = sinf(y[i]);
```


The mapping is needed to make sure that the compiler knows that the function `vector_sinf` is a vector variant of `sinf` that:

1. Operates concurrently on 4 lanes (the VLEN argument)
2. Does not have a mask parameter ("nomask")
3. Targets the Advanced SIMD (aka NEON) vector extension of AArch64 (<isa>="simd"; for other <isa> values, please refer to https://reviews.llvm.org/D72798)

With the information provided by the C attribute, the call to `@sinf` in IR is decorated with the following call-site attribute (again, see https://reviews.llvm.org/D72798):

```
%out = call float @sinf(float %in) #0
;; ...
declare <4 x float> @vector_sinf(<4 x float>)
attributes #0 = { "vector-function-abi-variant"="_ZGVnN4v_sinf(vector_sinf)" }
```

If we leave the IR as it is, when compiling it at -O2 and higher the declaration of the function `@vector_sinf` is removed from the module because it is unused.

This wouldn’t be a problem for functions like `sinf`, because even if we remove the declaration of `@vector_sinf`, we could reconstruct its signature from the information stored in the mangled name `_ZGVnN4v_sinf` of the attribute. The problem is that this wouldn’t work in the general case: the ABIs define the signature of the vector function based on the C signature of the scalar function, and that information is sometimes lost in the IR signature. For a practical example, see the `Type 1` / `Type 2` / `Type 3` example in the RFC (http://lists.llvm.org/pipermail/llvm-dev/2019-June/133484.html): all three C types generate the same IR type, but the target ABIs require the vector functions associated with those types to have different vector types.

To prevent this problem, we decided that the best way to retrieve the correct IR signature of the vector function is to get it from the signature generated by the frontend. For this reason, we have to make sure that such a signature is always available when the compiler needs it, including at loop vectorization time.

This means that we have to make the declaration sticky in the module, by appending it to `@llvm.compiler.used`.
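Concretely, a sketch of what that looks like in the module (names taken from the example above; with LLVM's typed pointers the entry is bitcast to `i8*`, and the array size depends on how many symbols are listed):

```
@llvm.compiler.used = appending global [1 x i8*]
    [i8* bitcast (<4 x float> (<4 x float>)* @vector_sinf to i8*)],
    section "llvm.metadata"
```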

That’s the reason why you see these unused declarations and their presence in `@llvm.compiler.used`: it records that the module was compiled knowing that a vector library is available, one that provides, for `sinf` invocations, the functions listed in the `"vector-function-abi-variant"` attribute.

Notice that the same attribute can be used for _defined_ functions (where the function body is present). In that case, the `@llvm.compiler.used` entry is not needed, because unused definitions are not removed by LLVM before vectorization.
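For illustration, a sketch of the defined-function case (the body here is a placeholder, not a real vector sine):

```
%out = call float @sinf(float %in) #0

define <4 x float> @vector_sinf(<4 x float> %v) {
entry:
  ret <4 x float> %v ; placeholder body
}

attributes #0 = { "vector-function-abi-variant"="_ZGVnN4v_sinf(vector_sinf)" }
```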

I hope this quick recap helps you understand what is going on. Please let me know if you have more questions (or ideas on how to improve this!).

Kind regards,

Francesco


> - Ben


