[libc-commits] [PATCH] D146994: [libc] Support setting 'native' GPU architecture for libc

Joseph Huber via Phabricator via libc-commits libc-commits at lists.llvm.org
Tue Mar 28 12:00:01 PDT 2023


jhuber6 added inline comments.


================
Comment at: libc/cmake/modules/prepare_libc_gpu_build.cmake:76
+set(gpu_test_architecture "")
+if(LIBC_GPU_TEST_ARCHITECTURE)
+  set(gpu_test_architecture ${LIBC_GPU_TEST_ARCHITECTURE})
----------------
tra wrote:
> jhuber6 wrote:
> > tra wrote:
> > > jhuber6 wrote:
> > > > tra wrote:
> > > > > Is that a singular architecture, or can we supply a list? 
> > > > > 
> > > > > We may want to be able to build for a set of user-specified GPUs and, if I can dream out loud, allow specifying which GPU to run them on, so I could just do `ninja check-libc-sm_70` and it would run the tests with CUDA_VISIBLE_DEVICES=<ID of sm_70 GPU>
> > > > > 
> > > > > 
> > > > > 
> > > > Right now it's a single architecture. The reason is that the internal objects used for testing and the ones exported via the `libcgpu.a` library are separate compilations. Since the existing `libc` testing infrastructure only ever expected a single executable, it was easiest to implement it this way. Basically, the `libcgpu.a` library contains `N` architectures packed into a single file, while the internal testing implementation builds for just one. Supporting what you're asking isn't impossible; it would just require a lot more CMake.
> > > Can we build a set of per-GPU test executables and then just pick one of them to run libc tests? 
> > > 
> > > 
> > > 
> > > 
> > So, `libc` works by making a bunch of targets according to the directories and filenames. For example, we compile some file and create the CMake target `libc.src.string.memcmp`, then attach the compiled files to that target via a CMake property. We could potentially attach several files to different properties and use the list of architectures to pull them out while building the tests. I didn't spend much time on changing it because I figured it would be sufficient to test a single one at a time. But it would be interesting to be able to test an AMD and an NVIDIA GPU at the same time.
> My strawman idea was, roughly, to wrap the part of the cmake which generates test targets in a loop iterating over GPUs, which would create the same targets but with a GPU-specific suffix; that should in theory make it a largely mechanical change. But I'm also not familiar with the details of the libc cmake, so if it's more invasive than that, targeting a single arch for tests is fine.
> It just looks a bit odd, considering that we do allow targeting multiple GPUs for the library itself. One would assume that we'd want to test multiple GPU variants, too.
Yeah, it's mostly just a convenience. The test files are built differently: they're intended to be compiled directly to a GPU image and then executed via the loader. The GPU library, on the other hand, is all LLVM-IR for LTO, packaged into a fatbinary for the "new driver". It's certainly doable, but it's not very high on my priorities right now.
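For reference, with the snippet above the test architecture is picked once at configure time by setting the cache variable, e.g. something like `-DLIBC_GPU_TEST_ARCHITECTURE=sm_70` (the `sm_70` value is only an illustrative assumption, not something prescribed by the patch).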
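As a very rough sketch of the per-GPU loop idea discussed above (purely illustrative; the `LIBC_GPU_TEST_ARCHITECTURES` list variable and the suffixed target names are assumptions, not part of this patch), the test-target generation could be wrapped roughly like this:

  # Hypothetical: iterate over every requested GPU test architecture and
  # create a suffixed aggregate target for each one, e.g. check-libc-sm_70.
  foreach(arch IN LISTS LIBC_GPU_TEST_ARCHITECTURES)
    add_custom_target(check-libc-${arch})
    # The existing per-file test rules would be generated here with ${arch}
    # appended to the usual target names, mirroring the single-arch path.
  endforeach()

With targets like that, running `ninja check-libc-sm_70` with `CUDA_VISIBLE_DEVICES` set to the matching device would give roughly the workflow described above.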


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146994/new/

https://reviews.llvm.org/D146994


