[libc-commits] [PATCH] D151129: [libc] Make ErrnoSetterMatcher handle logging floating point values.

Siva Chandra via Phabricator via libc-commits libc-commits at lists.llvm.org
Thu May 25 11:40:42 PDT 2023


sivachandra added a comment.

In D151129#4373162 <https://reviews.llvm.org/D151129#4373162>, @jhuber6 wrote:

> In D151129#4373139 <https://reviews.llvm.org/D151129#4373139>, @sivachandra wrote:
>
>> In D151129#4373077 <https://reviews.llvm.org/D151129#4373077>, @jhuber6 wrote:
>>
>>> In D151129#4373075 <https://reviews.llvm.org/D151129#4373075>, @sivachandra wrote:
>>>
>>>> I think all of those errors have a single root cause that `EILSEQ` was missing from the list of generic error numbers. I have added it now. Can you give it another try?
>>>
>>> Yes, I think we just need to make the `thread_local` error buffer a regular integer on platforms without `errno`. (Same reason we can't support it in the first place).
>>
>> Few things:
>>
>> 1. This patch is likely incorrectly titled because I uploaded it when I wanted to quickly show how we can ignore testing for `errno`. I am totally happy to remove that part (and re-title this patch anyway).
>
> Okay, this prevented running the existing tests on the GPU so it should at least be worked around for this to land.

How is the latest version of this patch?

>> 2. If you want to consider the GPU to be a platform without `errno`, that is up to you. But this question goes beyond `errno`: is the GPU platform in general a platform without thread-local variable support? If yes, we need to say that somewhere and give guidance on how to detect C library function errors.
>
> That's the simplest solution right now. The GPU does have thread-local memory; it comes from the stack. It's very limited, but it does exist. The problem is that this stack variable would require each kernel executing on the GPU to initialize the variables. There are basically a few ways to get around this if we really wanted to support it.
>
> 1. Preallocate global memory in the GPU loader so that every software thread has a slot.
> 2. Statically allocate `1024` integers in block-local memory and index into it.
> 3. Perform an `alloca` from the bottom of the stack and initialize it.
> 4. Set up true codegen for the GPU that does 3 automatically. This would require a lot of fun with sections, similar to how TLS is handled on Linux, to implement properly.
> 5. Ignore it completely.
> 6. Make it a global and accept the data races.
>
>> 3. Are the libc project groups and patches the right forum to discuss/decide on GPU `errno`?
>
> I'm not sure; there hasn't exactly been an outcry from the GPU community for `errno` support. Speaking frankly, most users would only *really* care about `malloc`, `free`, and `printf`. Most everything else can be seen as supplemental.

I will leave it up to you to drive this discussion in the appropriate forums, even for questions like: should there be an `errno.h` in the GPU libc with the relevant error macros exposed? For the current work in the libc around this, I only view these as "workarounds" which will help make more of the libc available on the GPUs while the fundamental questions are being sorted out.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D151129/new/

https://reviews.llvm.org/D151129


