<div dir="ltr"><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jun 11, 2020 at 12:07 PM Kai Peter Nacke <<a href="mailto:kai.nacke@de.ibm.com">kai.nacke@de.ibm.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hubert Tong <<a href="mailto:hubert.reinterpretcast@gmail.com" target="_blank">hubert.reinterpretcast@gmail.com</a>> wrote on 10.06.2020 <br>
23:51:54:<br>
>
> > From: Hubert Tong <hubert.reinterpretcast@gmail.com>
> > To: Kai Peter Nacke <kai.nacke@de.ibm.com>
> > Cc: llvm-dev <llvm-dev@lists.llvm.org>
> > Date: 10.06.2020 23:52
> > Subject: [EXTERNAL] Re: [llvm-dev] RFC: Adding support for the z/OS platform to LLVM and clang
> >
> > On Wed, Jun 10, 2020 at 3:11 PM Kai Peter Nacke via llvm-dev
> > <llvm-dev@lists.llvm.org> wrote:
> > 2) Add patches to Clang to allow EBCDIC and ASCII (ISO-8859-1) encoded
> > input source files. This would be done at file open time to allow the
> > rest of Clang to operate as if the source was UTF-8 and so require no
> > changes downstream. Feedback on this plan is welcome from the Clang
> > community.
> > Is there a statement that can be made with respect to accepting
> > UTF-8 encoded source files in a z/OS hosted environment or is it
> > implied that it works with no changes (and there are no changes that
> > will break this functionality)?
> >
> > Also, would these changes enable the consumption of non-UTF-8
> > encoded source files on Clang as hosted on other platforms?
>
> The intention is to use the auto-conversion feature from the Language
> Environment. Currently, this platform feature does not handle
> conversions of multi-byte encodings, so at this time consumption of
> UTF-8 encoded source files is not possible.

If the internal representation is still UTF-8, consuming UTF-8 should require no conversion at all. It sounds as though the internal representation has been changed to ISO-8859-1 in order to support characters outside those in US-ASCII. If it is indeed internally fixed to ISO-8859-1, then the question of future support for non-Latin (e.g., Greek or Cyrillic) scripts arises. It may be a better tradeoff to leave the internal representation as UTF-8 and restrict the support to the US-ASCII subset for now.
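For illustration only (the function name is made up, not an existing or proposed Clang interface): the ISO-8859-1-to-UTF-8 direction is mechanical, since the Latin-1 code points are the first 256 Unicode code points, so keeping the internal representation as UTF-8 would not by itself rule out accepting Latin-1 input later. A sketch:

  #include <string>

  // Widen an ISO-8859-1 buffer into UTF-8. Every Latin-1 byte maps to a
  // one-byte (US-ASCII) or two-byte UTF-8 sequence, so no tables are needed.
  // Illustrative only; not a proposed Clang API.
  std::string latin1ToUTF8(const std::string &In) {
    std::string Out;
    Out.reserve(In.size());
    for (unsigned char C : In) {
      if (C < 0x80) {
        Out.push_back(static_cast<char>(C)); // US-ASCII passes through
      } else {
        Out.push_back(static_cast<char>(0xC0 | (C >> 6)));   // lead byte
        Out.push_back(static_cast<char>(0x80 | (C & 0x3F))); // continuation
      }
    }
    return Out;
  }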
> For the same reason, this does not enable the consumption of
> non-UTF-8 encoded source files on other platforms.

Thanks, Kai, for clarifying. I think this direction leads to some questions around testing.

The auto-conversion feature makes use of filesystem-specific features such as file tags that indicate the associated coded character set. For the testing environment on a z/OS system under USS, will there be documentation or scripts available for establishing the necessary file properties on the local tree? It also sounds like there would be some tests, specific to z/OS-hosted builds, that exercise the conversion facilities.

Also, if the platform feature does not handle conversions of multi-byte encodings, I am wondering whether alternative mechanisms (such as iconv) have been investigated. I suppose there is an issue with how source positions are determined; however, I do not see how an extension of the auto-conversion facility would avoid that issue.
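As a very rough sketch of what an iconv-based path at file open time could look like (the codeset name and the helper below are only illustrative, not a proposed interface):

  #include <iconv.h>
  #include <cerrno>
  #include <cstddef>
  #include <stdexcept>
  #include <string>
  #include <vector>

  // Convert a whole buffer from a named codeset (e.g. "IBM-1047") to UTF-8
  // using POSIX iconv. Error handling is simplified for illustration.
  std::string convertToUTF8(const char *Src, size_t SrcLen,
                            const char *FromCode) {
    iconv_t CD = iconv_open("UTF-8", FromCode);
    if (CD == (iconv_t)-1)
      throw std::runtime_error("unsupported conversion");

    std::string Out;
    std::vector<char> Buf(4096);
    char *In = const_cast<char *>(Src);
    size_t InLeft = SrcLen;
    while (InLeft > 0) {
      char *OutPtr = Buf.data();
      size_t OutLeft = Buf.size();
      // E2BIG only means the output chunk is full; anything else is an error.
      if (iconv(CD, &In, &InLeft, &OutPtr, &OutLeft) == (size_t)-1 &&
          errno != E2BIG) {
        iconv_close(CD);
        throw std::runtime_error("conversion failed");
      }
      Out.append(Buf.data(), Buf.size() - OutLeft);
    }
    iconv_close(CD);
    return Out;
  }

Such a conversion would not, by itself, resolve the source-position question either, since the converted buffer is not guaranteed to line up byte-for-byte with the original.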