[llvm-dev] Looking for suggestions: Inferring GPU memory accesses

Ees Kee via llvm-dev llvm-dev at lists.llvm.org
Sat Aug 22 07:38:49 PDT 2020


Hi all,

As part of my research I want to investigate the relation between the
grid's geometry and the memory accesses of a kernel in common GPU
benchmarks (e.g. Rodinia, Polybench). As a first step I want to
answer the following question:

- Given a kernel function with M possible memory accesses, for how many
of those M accesses can we statically infer the accessed location, given
concrete values for the grid/block dimensions and the executing thread?

(Assume CUDA only for now)

My initial idea is to replace all uses of dimension-related values, e.g.:
    __cuda_builtin_blockDim_t::__fetch_builtin_x()
    __cuda_builtin_gridDim_t::__fetch_builtin_x()

and index-related values, e.g.:
    __cuda_builtin_blockIdx_t::__fetch_builtin_x()
    __cuda_builtin_threadIdx_t::__fetch_builtin_x()

with ConstantInts, then run constant folding on the result and check how
many GEPs end up with constant operands.
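To illustrate the counting step I have in mind, here is a minimal stand-alone sketch (plain Python, not LLVM API code; all names are made up for illustration). Each access is modelled as a function from concrete grid/block/thread values to a byte offset, returning None when the address depends on runtime data and so would not constant-fold even with the builtins replaced:

```python
# Toy model of the analysis (illustrative only, not LLVM code): an access
# "folds" if it resolves to a concrete offset once gridDim/blockDim/
# blockIdx/threadIdx are fixed to concrete values.

def count_foldable(accesses, ctx):
    """Count accesses that resolve to a concrete address under ctx."""
    return sum(1 for a in accesses if a(ctx) is not None)

ctx = {"gridDim_x": 4, "blockDim_x": 256, "blockIdx_x": 1, "threadIdx_x": 7}

accesses = [
    # A[blockIdx.x * blockDim.x + threadIdx.x]: folds once the grid
    # geometry and the executing thread are fixed (4-byte elements).
    lambda c: 4 * (c["blockIdx_x"] * c["blockDim_x"] + c["threadIdx_x"]),
    # B[idx[tid]]: indirect access through runtime data; never folds.
    lambda c: None,
]

print(count_foldable(accesses, ctx), "of", len(accesses), "accesses fold")
# -> 1 of 2 accesses fold
```

The real version would of course walk the IR instead: substitute ConstantInts for the builtin calls, fold, and count the GEPs whose operands became constants.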

Would something like this work, or are there complications I am not
thinking of? I'd appreciate any suggestions.

P.S. I am new to LLVM.

Thanks in advance,
Ees