[llvm-dev] Looking for suggestions: Inferring GPU memory accesses
Johannes Doerfert via llvm-dev
llvm-dev at lists.llvm.org
Sun Aug 23 09:47:55 PDT 2020
A while back we started a project with similar scope.
Unfortunately the development slowed down and the plans to revive it
this summer got tanked by the US travel restrictions.
Anyway, there is some existing code that might be useful, though in
a prototype stage. While I'm obviously biased, I would suggest we
continue from there.
@Alex @Holger can we put the latest version on GitHub or some other
place to share it? I'm unsure if the code I (might have) access to is
the latest.
@Ees I attached a recent paper and you might find the following links
useful:
* 2017 LLVM Developers’ Meeting: J. Doerfert, “Polyhedral Value &
Memory Analysis”, https://youtu.be/xSA0XLYJ-G0
* "Automated Partitioning of Data-Parallel Kernels using Polyhedral
Compilation.", P2S2 2020 (slides and video)
Let us know what you think :)
On 8/22/20 9:38 AM, Ees Kee via llvm-dev wrote:
> Hi all,
> As part of my research I want to investigate the relation between the
> grid's geometry and the memory accesses of a kernel in common gpu
> benchmarks (e.g Rodinia, Polybench etc). As a first step i want to
> answer the following question:
> - Given a kernel function with M possible memory accesses: for how many
> of those M accesses can we statically infer their location, given concrete
> values for the grid/block dimensions and the executing thread?
> (Assume CUDA only for now)
> My initial idea is to replace all uses of dim-related values, e.g.
> blockDim.x / gridDim.x (llvm.nvvm.read.ptx.sreg.ntid.x /
> llvm.nvvm.read.ptx.sreg.nctaid.x), and index-related values, e.g.
> threadIdx.x / blockIdx.x (llvm.nvvm.read.ptx.sreg.tid.x /
> llvm.nvvm.read.ptx.sreg.ctaid.x),
> with ConstantInts. Then run constant folding on the result and check how
> many GEPs have constant values.
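The substitute-and-fold idea above can be sketched, very roughly, in plain Python. This is illustration only, not LLVM API: the fold_* helpers and the UNKNOWN marker are made up for the sketch, standing in for ConstantInt substitution and constant folding of a GEP index.

```python
# Illustration of the proposed scheme: plug concrete values in for the
# grid/block/thread builtins, then constant-fold each address expression.
# (Plain Python stand-in, NOT LLVM code.)

UNKNOWN = None  # stands for a value not known at compile time

def fold_mul(a, b):
    # Folds only when both operands are known constants.
    return None if a is None or b is None else a * b

def fold_add(a, b):
    return None if a is None or b is None else a + b

def analyze(threadIdx_x, blockIdx_x, blockDim_x):
    # Access 1: A[blockIdx.x * blockDim.x + threadIdx.x] -- every operand
    # comes from a builtin, so it folds once the grid/thread are fixed.
    acc1 = fold_add(fold_mul(blockIdx_x, blockDim_x), threadIdx_x)
    # Access 2: B[C[i]] -- the index is loaded from memory, so it stays
    # unknown even with a fully concrete grid.
    loaded = UNKNOWN  # value of C[i]: data-dependent
    acc2 = loaded
    return acc1, acc2

acc1, acc2 = analyze(threadIdx_x=3, blockIdx_x=2, blockDim_x=128)
print(acc1)  # 2 * 128 + 3 = 259 -> statically inferable
print(acc2)  # None -> not inferable from the grid geometry alone
```

Counting how many accesses come back with a concrete value (rather than UNKNOWN) is the analogue of counting how many GEPs fold to constants.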
> Would something like this work, or are there complications I am not aware
> of? I'd appreciate any suggestions.
> P.S. I am new to LLVM.
> Thanks in advance,