[PATCH] D78655: [HIP] Let lambda be host device by default
Artem Belevich via Phabricator via cfe-commits
cfe-commits at lists.llvm.org
Thu Apr 30 15:39:06 PDT 2020
tra added a comment.
In D78655#2013616 <https://reviews.llvm.org/D78655#2013616>, @pfultz2 wrote:
> > I.e. if I pass a mutable lambda by reference to the GPU kernel
>
> I don't think we are enabling passing host objects by reference through functions. Although it could be possible to capture the mutable lambda by reference in another lambda.
You're not enabling it, but I don't think you prevent it either. In any case, the point was that we don't have a consistent way to handle capturing lambdas in CUDA and HIP. Hence my suggestion not to use them in the tests, as doing so would implicitly suggest otherwise.
>> will the same lambda called on host do the same thing when it's called on the device?
>
> Yes, just the same as capturing a host variable by reference and using it on the device.
I don't think I understand. Capturing happens when/where the lambda object is created. It captures addresses as seen at that point in time on the processor that executed the constructor.
When a reference to the lambda object is passed to another processor, all bets are generally off. Again, beyond the point about not using capturing lambdas in the tests, this is somewhat irrelevant to this patch.
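To make the hazard concrete, here is a hypothetical sketch (not from the patch or its tests; the kernel name `launch` is made up, and the explicit `__host__ __device__` annotation assumes nvcc-style extended lambdas, which this patch would make implicit):

```cuda
#include <cstdio>

// Generic kernel that invokes a callable on the device.
// Name and shape are illustrative only.
template <typename F>
__global__ void launch(F f) {
  printf("%d\n", f());
}

int main() {
  int x = 42;
  // The capture happens here, on the host: the closure stores the *host*
  // address of `x`.
  auto lam = [&x] __host__ __device__ () { return x; };
  // Copying the closure to the device copies that host address verbatim;
  // nothing remaps it to device memory. Dereferencing it inside `launch`
  // is undefined behavior unless host and device share a unified address
  // space.
  launch<<<1, 1>>>(lam);
  cudaDeviceSynchronize();
  return 0;
}
```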
>> In principle it would work if GPU and host operate on a uniform memory
>
> A unified memory is not necessary. What is needed is a coordination between the compiler and runtime.
>
> We don't support capturing a host variable by reference, so maybe we can restrict the implicit HD to lambdas that don't capture by reference?
Another indication that capturing lambdas are not ready for prime time.
>> According to cppreference, it's only true since C++17 and, AFAICT, only for capture-less lambdas.
>
> You can capture as well, if it's in a `constexpr` context.
You can, but the point is that the lambda's `operator()` is not *always* constexpr, and your assertion does not change that.
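For reference, a small C++17 illustration of that point (plain C++, no CUDA extensions; the names are made up for this sketch):

```cuda
// C++17 makes a lambda's operator() implicitly constexpr only where it
// satisfies the constexpr-function requirements; it is not constexpr
// unconditionally.
constexpr int square(int n) {
  auto f = [](int v) { return v * v; };  // operator() implicitly constexpr
  return f(n);
}
static_assert(square(3) == 9, "usable in a constant expression");

int global_state = 0;
inline int read_state() {
  auto g = [] { return global_state; };  // reads mutable global state
  // constexpr int v = g();  // ill-formed: not a constant expression
  return g();                // fine at run time, but not constexpr
}
```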
>> Considering they are not always constexpr, this assertion is not true, either.
>
> Yes, we seem to delay this. It is always HD but not always emitted for both host and device.
Could you elaborate? I'm not sure what you mean by `we seem to delay this`, or what it has to do with the assertion that lambdas are not always `constexpr` by default.
>> If/when operator() does get constexpr treatment by compiler, we should already derive HD attributes from constexpr. If we do not, then that's what needs to be fixed.
>
> How does the compiler implement this? Does it add a `constexpr` attribute to the operator(), or does constexpr evaluation visit the lambda as if it were `constexpr`? It seems the latter would be more efficient, and it would be similar to what we are doing with HD. The only difference is that a function can be overloaded on `__host__` and `__device__`, whereas that is not possible with `constexpr`. So a difference could be detected by the user, but maybe that doesn't matter.
I think it's done here:
https://github.com/llvm/llvm-project/blob/master/clang/lib/Sema/SemaCUDA.cpp#L557
We basically slap implicit HD on constexpr functions when we process function declarations. It's likely that lambdas may go through a different code path and miss this.
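For context, here is the user-visible effect of that rule (a sketch of behavior I'd expect in clang's CUDA/HIP mode, where implicit HD for constexpr can be turned off with `-fno-cuda-host-device-constexpr`):

```cuda
// A constexpr function with no explicit target attributes gets implicit
// __host__ __device__, so it is callable from both sides.
constexpr int twice(int x) { return x + x; }

__global__ void kernel(int* out) {
  *out = twice(21);  // OK: implicit HD via the constexpr rule
}

int inc(int x) { return x + 1; }  // plain (host-only) function

__global__ void bad(int* out) {
  // *out = inc(21);  // error: host function called from device code
}

int main() {
  int y = twice(21);  // OK on the host as well
  (void)y;
  return 0;
}
```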
>> That at least would make sense from consistency standpoint as we currently do treat all other constexpr functions as HD.
>
> I mean consistent across the different attributes, not in the interpretation of constexpr. A lambda that only calls constexpr functions implicitly has the `constexpr` attribute. So a lambda that only calls device (or HD) functions should implicitly have the `__device__` attribute.
Again, it's a subject for a wider discussion beyond the scope of this patch. Right now we're only dealing with the question of whether lambdas should always be HD by default.
I'm OK with making `constexpr` lambdas HD as it would match how we handle regular functions. I don't think non-constexpr lambdas should be HD.
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D78655/new/
https://reviews.llvm.org/D78655