[libc-commits] [PATCH] D139839: [libc] Add a loader utility for AMDHSA architectures for testing

Siva Chandra via Phabricator via libc-commits libc-commits at lists.llvm.org
Tue Dec 13 14:21:39 PST 2022


sivachandra added a comment.

In D139839#3991856 <https://reviews.llvm.org/D139839#3991856>, @jhuber6 wrote:

> The current idea is to use these tools to make a separate version of the `LibcTest.cpp` source that is compatible with the GPU. This way we could compile the unit test source and test file, link in the "startup" code, and then launch the result on the GPU using the loader to run the test.

I think a simple way to get this going is to make `LibcTest.h` (or https://github.com/llvm/llvm-project/blob/main/libc/utils/UnitTest/Test.h) short-circuit to `libc/utils/IntegrationTest/test.h` when building a unit test as an integration test. That way, we don't have to build all of the unit test machinery for the GPU, at least to begin with. The integration test machinery is very simple, very much like what @JonChesterfield wanted: a collection of truth value assertions with some syntactic sugar.

>> 2. The Linux startup code currently lives in the directory named `loader`: https://github.com/llvm/llvm-project/tree/main/libc/loader. I agree that the name "loader" is not appropriate. We have plans to change it to "startup". What you are calling `amdhsa_crt` should live alongside the Linux startup code and follow the conventions of the Linux startup code.
>
> Would that cause this to be exported? It's primarily being proposed for testing but we could probably expose it like any other loader, it would make the GPU behave more like a standard `libc` target which would be nice.

I do not know if it will just work out of the box for you. But if we can get the GPU environment to behave more like a normal host environment, it will be easier for developers not working on the GPU libc to maintain and reason about. So, I definitely vote for making the GPU procedures similar to the host procedures.

>> 3. As far as running the unit tests on the device, I think what we really want is the ability to run the unit tests as integration tests. We might not be able to run all of the unit tests in that fashion - for example, the math unit tests use MPFR, which I would think cannot be made to run on the GPU. So, we can do two different things here: i) Add some integration tests which can be run on the GPU as well as on a Linux host. ii) Develop the ability to run any unit test as an integration test. We really want to be explicit in listing targets in the libc project, so one approach could be to list integration test targets which point to the unit test sources. And this can be done for a Linux host and the GPU.
>
> Could you elaborate on the difference in the `libc` source? I think the latter is the option I was going for: for example, we would take a `strcmp` test, compile it directly for the GPU architecture along with a modified `LibcTest.cpp` source, and pass it to the loader to see if it fails. This approach might also be useful on Linux, I'd assume, to see if we can bootstrap calls to `libc` routines with the loader.

I also vote for the latter. So, for `strcmp_test`, we should add an integration test in `test/integration/src/string` which picks up the sources of the corresponding unit test in `test/src/string/` and runs them as an integration test.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D139839/new/

https://reviews.llvm.org/D139839
