<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style>
</head>
<body lang="EN-US" link="blue" vlink="purple" style="word-wrap:break-word">
<div class="WordSection1">
<p class="MsoNormal">Renato,<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Sorry for not replying right away. After the Thanksgiving break, I was in meeting for most of this week and only now catching up on e-mail.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Thanks for raising your concerns and point taken. <o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I am new to the llvm-test-suite and spent most of the day looking through it.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">As I was planning this out in my head, I do think in the first differential we would add the CMake plumbing and a simple opens source program or two (e.g. hand-coded GEMM or something along those lines). SPEC would come in the next batch.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">On a related note, are there any buildbots running the llvm-test-suite or are folks just running llvm-test-suite manually?<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal" style="margin-left:.5in"><b><span style="font-size:12.0pt;color:black">From:
</span></b><span style="font-size:12.0pt;color:black">Renato Golin <rengolin@gmail.com><br>
<b>Date: </b>Thursday, November 26, 2020 at 4:38 AM<br>
<b>To: </b>Johannes Doerfert <johannesdoerfert@gmail.com><br>
<b>Cc: </b>via llvm-dev <llvm-dev@lists.llvm.org>, Nichols Romero <naromero@anl.gov><br>
<b>Subject: </b>Re: [llvm-dev] RFC Adding Fortran tests to LLVM Test Suite<o:p></o:p></span></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<div>
<p class="MsoNormal" style="margin-left:.5in">I don't disagree with your roadmap. If I'm reading correctly, SPEC is only the first benchmark, not the first program. <o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<p class="MsoNormal" style="margin-left:.5in">My point was to add the language tests, and perhaps one small program as a benchmark, to test the infrastructure. SPEC could come in the same batch, to show that the CMake glue works for all parts.<o:p></o:p></p>
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">I wouldn't add CMake glue with SPEC only, as a first step. That's all I'm saying.<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">Cheers, <o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="margin-left:.5in">Renato <o:p></o:p></p>
</div>
</div>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
<div>
<div>
<p class="MsoNormal" style="margin-left:.5in">On Wed, 25 Nov 2020, 23:46 Johannes Doerfert, <<a href="mailto:johannesdoerfert@gmail.com">johannesdoerfert@gmail.com</a>> wrote:<o:p></o:p></p>
</div>
<blockquote style="border:none;border-left:solid #CCCCCC 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in">
<p class="MsoNormal" style="margin-left:.5in"><br>
On 11/25/20 2:33 PM, Renato Golin wrote:<br>
> On Wed, 25 Nov 2020 at 18:10, Johannes Doerfert <<a href="mailto:johannesdoerfert@gmail.com" target="_blank">johannesdoerfert@gmail.com</a>><br>
> wrote:<br>
><br>
>>> If you only add infrastructure to build Fortran programs inside SPEC, then<br>
>>> your change would be biased towards an external benchmark that is private<br>
>>> to some companies.<br>
>> That doesn't make any sense to me.<br>
>> Nobody suggested changing anything "inside SPEC".<br>
>><br>
> A good part of your reply assumes I meant what you say above. I didn't.<br>
><br>
> We're talking past each other. Let me try again.<br>
><br>
> As I said in my original reply, I'm very supportive of the initiative to<br>
> add Fortran to the test suite, with tests, benchmarks, and OpenMP. This is<br>
> very good news.<br>
><br>
> But the test-suite doesn't have core ownership, a group that has a plan<br>
> and implements all the parts of a bigger design goal. For many years we<br>
> have tried to unify tests and benchmarks; Kristof did a great job rallying<br>
> people around and so many other people contributed, but once it "works",<br>
> people stop paying attention.<br>
><br>
> I just want to make sure that the overall support for Fortran in the<br>
> test-suite is focused on building tests, benchmarks and other tools that<br>
> are available upstream to all users.<br>
><br>
> If adding Fortran support to the existing SPEC scripts is orthogonal, then<br>
> it shouldn't be part of this discussion. If it's not, then it shouldn't be<br>
> the main driver for the rest of the infrastructure.<br>
><br>
>>> Public build-bots will start building those tests and benchmarks (remember,<br>
>>> it's not just benchmarks in there), and you'll need some time to adjust<br>
>>> strategy until it all works across the board.<br>
>> Strategy: If you don't set it up to run Fortran codes, it won't.<br>
>><br>
> I'm going to take this as a tongue-in-cheek comment. The reductionism here<br>
> isn't really helpful.<br>
><br>
> Fortran is just the language, but there are architectures and operating<br>
> systems that need adjusting, too.<br>
><br>
>> Fortran benchmark support in the LLVM Test Suite, and literally<br>
>> everything else mentioned in the initial RFC, is beneficial to the<br>
>> community. SPEC support is not something harmful.<br>
>><br>
> We definitely agree on that.<br>
><br>
>> How did you come to that conclusion after the initial RFC explicitly listed<br>
>> other benchmarks and apps we want to include in the test suite?<br>
>><br>
> The original RFC was very clear. Your response was less so.<br>
><br>
> In my reply to the RFC, I said I worry that we're focusing on SPEC too<br>
> early. I'd rather make sure it works upstream before adding SPEC to the<br>
> mix.<br>
><br>
> The point I tried to convey (and clearly failed to) is that the test-suite<br>
> isn't a robust and well-designed infrastructure, but a patchwork of<br>
> different approaches over the years, which seems to "work fine" with what<br>
> we have.<br>
><br>
> I may have read that wrong, but it sounded to me as if you were defending<br>
> the prioritisation of SPEC "and some micro benchmarks" over the rest of the<br>
> proposal.<br>
><br>
> I think that's a mistake, because it risks being the main thing that gets<br>
> added and then not much else comes later (priorities change, etc).<br>
><br>
> If my interpretation is wrong, I apologise and we can ignore our past<br>
> exchange. I'm still very supportive of this RFC. :)<br>
<br>
The way I understand your emails is that you argue against the roadmap because<br>
it lists SPEC as a first proper benchmark/app. This is actually on purpose:<br>
<br>
SPEC is a well-tested external benchmark suite with existing support in the<br>
LLVM test suite, and it allows for stable results with existing compilers. We<br>
know which compilers work with SPEC, we know the expected outputs, we know how<br>
to select different input sizes, we know how to glue it into the test suite, etc.<br>
<br>
The alternative is to bring in new benchmarks/apps, which come with multiple<br>
other challenges, as you noted before. For testing the Fortran plumbing with<br>
non-trivial programs, SPEC seems like an ideal candidate.<br>
<br>
I say this because Nick was working on compiling existing benchmarks and apps<br>
with Flang (sema only), and that often entails dealing with complex, undocumented,<br>
and unmaintained build systems. That is on top of potential issues w.r.t. numerical<br>
stability, non-standard-compliant code, ...<br>
<br>
Don't get me wrong, adding other benchmarks is already part of the roadmap we are<br>
committed to. We recently added the C/C++ proxy apps, and we are working on<br>
Parallelism/OpenMP (+offloading) support. This is not a one-off effort.<br>
<br>
Please also note that we asked in the mail for benchmark/app ideas so we know<br>
what to look at next. We are certainly committed to working on this well past<br>
SPEC support. I know that is true for the ANL and DOE people, and I'm very<br>
certain it is true for the wider Flang community as well.<br>
<br>
~ Johannes<br>
<br>
<br>
P.S. We're heading into a long Thanksgiving weekend, so it's unclear how<br>
responsive I'll be over the next two days. I hope you'll also have a nice and<br>
relaxing weekend :)<br>
<br>
<br>
><br>
> cheers,<br>
> --renato<br>
><o:p></o:p></p>
</blockquote>
</div>
</div>
</body>
</html>