[Openmp-dev] Malloc in target region

Hal Finkel via Openmp-dev <openmp-dev at lists.llvm.org>
Wed Jun 24 18:32:13 PDT 2020


On 6/24/20 7:26 PM, Jeff Hammond wrote:
> Does OpenMP 5 allow calling symbols that are not omp-declare-target?


I'm not sure this is relevant because the implementation gets to provide 
such symbols (and we depend on it doing so in other cases, such as for 
math.h functions).
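
As a concrete illustration (just a sketch), code like the following
already relies on the implementation providing a device-callable
version of a symbol that is not explicitly declare target:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
      double x = 0.0;
      /* sin() is not marked "declare target" here; we rely on the
         implementation supplying a device-callable version of it. */
      #pragma omp target map(tofrom: x)
      { x = sin(1.0); }
      printf("%f\n", x);
      return 0;
    }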


>
> In any case, I am concerned about malloc interception, having seen 
> that break many times in the past.  OpenMP isn't the only thing that 
> wants to intercept malloc, and nested interception may not be stable.


Why are you bringing up interception? I did not think that anything in 
this message implied intercepting malloc.

  -Hal


>
> Why is omp_alloc not the best option here?  That avoids essentially 
> all of the issues I can think of.
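>
> As a sketch of what I mean (assuming the runtime supports the
> allocator routines inside target regions):
>
>     #include <omp.h>
>
>     void use_omp_alloc(void) {
>       #pragma omp target
>       {
>         /* Allocate from an OpenMP allocator instead of calling malloc;
>            omp_default_mem_alloc is the predefined default allocator. */
>         int *p = (int *)omp_alloc(100 * sizeof(int), omp_default_mem_alloc);
>         if (p) {
>           p[0] = 42;
>           omp_free(p, omp_default_mem_alloc);
>         }
>       }
>     }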
>
> Jeff
>
> On Wed, Jun 24, 2020 at 11:12 AM Hal Finkel via Openmp-dev 
> <openmp-dev at lists.llvm.org> wrote:
>
>     Hi, Jon,
>
>     This is a great question.
>
>     With reverse-offload support, and unified memory, we can support a
>     model where memory allocation triggers reverse offload to the
>     memory allocator on the host. In this mode, everything works as
>     expected. We can, of course, do some static analysis and move
>     allocations that don't escape to use some local allocation scheme,
>     such as what we use without unified memory + reverse offload.
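>
>     Conceptually, the mechanism might look something like this sketch
>     (a hypothetical helper, not something implementations necessarily
>     provide today):
>
>         #include <stdlib.h>
>
>         #pragma omp requires reverse_offload unified_shared_memory
>
>         #pragma omp declare target
>         /* Hypothetical: route a device-side allocation back to the
>            host's malloc via reverse offload; with unified memory the
>            returned pointer is usable on the device as well. */
>         void *device_side_malloc(size_t n) {
>           void *p = NULL;
>           #pragma omp target device(ancestor: 1) map(tofrom: p) firstprivate(n)
>           { p = malloc(n); }
>           return p;
>         }
>         #pragma omp end declare target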
>
>     Without such support, I think that "One heap per device + one for
>     host. Each independent, pointers only valid on the thing that
>     called malloc." makes the most sense. This also, as far as I know,
>     matches what's available in CUDA today.
>
>     "One heap per target offload region" doesn't make sense to me. One
>     might clearly want to allocate in one target region, store the
>     pointers in some data structure, and then access them in some
>     other target region on the same device.
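>
>     That is, a pattern like the following (a rough sketch, assuming a
>     device-side malloc and per-device heaps) ought to work as long as
>     both regions run on the same device:
>
>         #include <stdlib.h>
>
>         int *device_ptr;   /* only meaningful on the device that set it */
>         #pragma omp declare target to(device_ptr)
>
>         void two_regions(void) {
>           #pragma omp target
>           { device_ptr = malloc(64 * sizeof(int)); }  /* allocate in one region */
>
>           #pragma omp target
>           { if (device_ptr) device_ptr[0] = 1; }      /* use it in a later one */
>
>           #pragma omp target
>           { free(device_ptr); }                       /* free on the same device */
>         }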
>
>     Thanks again,
>
>     Hal
>
>     On 6/24/20 6:40 AM, Jon Chesterfield via Openmp-dev wrote:
>>     Hello OpenMP,
>>
>>     Our language spec seems fairly light on what it means to call
>>     malloc from a target region.
>>
>>     I can think of a few interpretations:
>>     - One heap per process. Malloc on target or host, free from
>>     either. Writable from either, or some other device. Might mean
>>     intercepting host libc. Convenient, slow.
>>     - One heap per device + one for host. Each independent, pointers
>>     only valid on the thing that called malloc.
>>     - One heap per target offload region, inaccessible from host.
>>     - Some other granularity.
>>
>>     Generally gets faster as the restrictions increase.
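>>
>>     For instance (just a sketch), the following only has defined
>>     behavior under the first, one-heap-per-process, reading:
>>
>>         #include <stdlib.h>
>>
>>         void device_then_host(void) {
>>           void *p = NULL;
>>           #pragma omp target map(tofrom: p)
>>           { p = malloc(128); }   /* allocated by the device-side malloc */
>>
>>           /* With per-device (or per-region) heaps this free is invalid;
>>              it is only meaningful if host and device share one heap. */
>>           free(p);
>>         }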
>>
>>     Anyone willing to state / guess what they or their users would
>>     expect? Bearing in mind that operator new is likely to call malloc
>>     and will therefore gain the same properties.
>>
>>     Thanks,
>>
>>     Jon
>>
>>
>
>     -- 
>     Hal Finkel
>     Lead, Compiler Technology and Programming Languages
>     Leadership Computing Facility
>     Argonne National Laboratory
>
>
>
>
> -- 
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/

-- 
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory


