[libc-commits] [PATCH] D139839: [libc] Add a loader utility for AMDHSA architectures for testing

Joseph Huber via Phabricator via libc-commits <libc-commits at lists.llvm.org>
Tue Dec 13 07:02:02 PST 2022


jhuber6 added a comment.

Thanks for the helpful feedback, Siva.

In D139839#3991165 <https://reviews.llvm.org/D139839#3991165>, @sivachandra wrote:

> I have not gone through every line of code, and I am likely not the right person to review the GPU side of things. A few high-level comments about organization:
>
> 1. I think what you are proposing here is the ability to add what we call "integration tests" in the libc today. See https://github.com/llvm/llvm-project/tree/main/libc/test/integration. The idea behind an integration test is that the test executable contains only pieces from the libc, so even the startup object like `crt1.o` comes from the libc. Items like the thread libraries, whose implementation is tightly coupled to the implementation of the startup system, are tested using integration tests. Things which can be tested without pulling in the startup code are still tested using unit tests. If I understand your patch, you are proposing to use special startup code from the libc to run even the unit tests, which probably implies that you want to run the unit tests as integration tests on the GPU.

Yes, executing anything on the GPU requires some interaction from the host to set up the data and begin execution. The "loader" in this patch provides the minimum infrastructure required to launch execution on a GPU agent, while the "startup" code provides the glue that calls `main`. The current idea is to use these tools to build a separate version of the `LibcTest.cpp` source that is compatible with the GPU. That way we could compile the unit test framework and the test file, link in the "startup" code, and then launch the result on the GPU with the loader; a rough sketch of the startup glue is below.
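To make the split concrete, here is a minimal sketch of what the AMDHSA "startup" glue could look like. The kernel name `_start`, the argument layout, and the return-value slot are illustrative assumptions, not necessarily what this patch implements.

```c++
// Hypothetical sketch of the AMDHSA startup glue: a kernel entry point that
// forwards the host-provided arguments to the test's `main` and stores the
// result somewhere the host can read it back.
extern "C" int main(int argc, char **argv);

extern "C" [[clang::amdgpu_kernel]] void _start(int argc, char **argv,
                                                int *ret) {
  // The loader reads *ret after the dispatch completes and can use it as the
  // host process exit code.
  *ret = main(argc, argv);
}
```

The loader would then be responsible for allocating `argv` and the return-value slot in memory visible to the device before dispatching the kernel.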

Another option would be to use an existing offloading runtime like OpenMP to perform the loading, but I'm hesitant to bring a large established runtime into `libc` and prefer this method for the reasons @JonChesterfield mentioned above.

> 2. The Linux startup code currently lives in the directory named `loader`: https://github.com/llvm/llvm-project/tree/main/libc/loader. I agree that the name "loader" is not appropriate. We have plans to change it to "startup". What you are calling `amdhsa_crt` should live alongside the Linux startup code and follow the same conventions.

Would that cause this to be exported? It's primarily being proposed for testing, but we could probably expose it like any other startup object. That would make the GPU behave more like a standard `libc` target, which would be nice.

> As I see it, the way forward for this patch is this:
>
> 1. I will rename the current `loader` directory to `startup`, and you can move the GPU startup implementation there. As it stands in this patch, it is a trivial implementation, so you can land it separately after I do the renaming.

Sounds good.

> 2. You can land the `DeviceLoader` implementation as a next step - GPU/AMD experts should review that. Keeping it under `libc/utils` sounds good, but maybe it should live in a nested directory, `libc/utils/gpu`.

@JonChesterfield is the resident HSA expert; we can review that part separately, but moving it into a nested directory sounds good.
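For context on what that `DeviceLoader` review would cover, here is an illustrative outline of the host-side HSA flow such a loader needs, written against the public `hsa.h` entry points. It is only a sketch of the general technique: error handling, memory allocation for kernel arguments, and queue/executable teardown are omitted, and the symbol name and dispatch details are simplified.

```c++
#include <hsa/hsa.h>

#include <cstddef>
#include <cstdint>

// Callback for hsa_iterate_agents: remember the first GPU agent we see.
static hsa_status_t find_gpu(hsa_agent_t agent, void *data) {
  hsa_device_type_t type;
  hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
  if (type == HSA_DEVICE_TYPE_GPU)
    *static_cast<hsa_agent_t *>(data) = agent;
  return HSA_STATUS_SUCCESS;
}

// Load an amdgcn ELF image, find its `_start` kernel, and dispatch it.
int load_and_run(const void *image, size_t size) {
  hsa_init();

  // Locate a GPU agent and create a queue to dispatch work on it.
  hsa_agent_t gpu{};
  hsa_iterate_agents(find_gpu, &gpu);
  hsa_queue_t *queue;
  hsa_queue_create(gpu, /*size=*/1024, HSA_QUEUE_TYPE_SINGLE, nullptr, nullptr,
                   UINT32_MAX, UINT32_MAX, &queue);

  // Load and freeze the device executable, then look up the kernel descriptor
  // for the `_start` entry point provided by the startup code.
  hsa_code_object_reader_t reader;
  hsa_code_object_reader_create_from_memory(image, size, &reader);
  hsa_executable_t executable;
  hsa_executable_create_alt(HSA_PROFILE_FULL,
                            HSA_DEFAULT_FLOAT_ROUNDING_MODE_DEFAULT, "",
                            &executable);
  hsa_executable_load_agent_code_object(executable, gpu, reader, "", nullptr);
  hsa_executable_freeze(executable, "");
  hsa_executable_symbol_t sym;
  hsa_executable_get_symbol_by_name(executable, "_start.kd", &gpu, &sym);
  uint64_t kernel_object;
  hsa_executable_symbol_get_info(sym, HSA_EXECUTABLE_SYMBOL_INFO_KERNEL_OBJECT,
                                 &kernel_object);

  // Remaining steps, elided here: write an hsa_kernel_dispatch_packet_t with
  // this kernel_object and a completion signal at the queue's write index,
  // ring the doorbell with hsa_signal_store_screlease(), then block on the
  // completion signal with hsa_signal_wait_scacquire() before reading back
  // the test's return value.
  (void)kernel_object;
  return 0;
}
```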

> 3. As far as running the unit tests on the device, I think what we really want is the ability to run the unit tests as integration tests. We might not be able to run all of the unit tests in that fashion - for example, the math unit tests use MPFR, which I would think cannot be made to run on the GPU. So, we can do two different things here: i) Add some integration tests which can be run on the GPU as well as on a Linux host. ii) Develop the ability to run any unit test as an integration test. We really want to be explicit in listing targets in the libc project, so one approach could be to list integration test targets which point to the unit test sources. And this can be done for a Linux host and the GPU.

Could you elaborate on how those two options would differ in the `libc` source? I think the latter is the one I was going for: for example, we would take a `strcmp` test, compile it directly for the GPU architecture along with a modified `LibcTest.cpp` source, and pass the result to the loader to see whether it fails. I'd assume this approach might also be useful on Linux to check that we can bootstrap calls to `libc` routines with the loader.
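To illustrate what that would look like end-to-end, here is a hypothetical sketch of a test compiled directly for the GPU. The `expect` helper is a stand-in for whatever checking machinery a modified `LibcTest.cpp` would provide; the only real requirement is that the pass/fail result flows back through `main`'s return value, which the startup kernel stores and the loader can surface as its exit status.

```c++
#include <string.h>

// Number of failed checks; reported back through main's return value.
static int failures = 0;

// Hypothetical stand-in for the EXPECT_* machinery a GPU-compatible
// LibcTest.cpp would provide. A real harness would also record which
// check failed.
static void expect(bool condition) { failures += condition ? 0 : 1; }

extern "C" int main(int, char **) {
  // A trivial stand-in for a strcmp unit test body.
  expect(strcmp("abc", "abc") == 0);
  expect(strcmp("abc", "abd") < 0);
  return failures ? 1 : 0;
}
```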


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D139839/new/

https://reviews.llvm.org/D139839


