<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Wed, May 31, 2017 at 10:44 AM Matthias Braun via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
>> On May 31, 2017, at 4:06 AM, Pavel Labath <labath@google.com> wrote:
>>
>> Thank you all for the pointers. I am going to look at these to see if
>> there is anything that we could reuse, and come back. In the meantime,
>> I'll reply to Matthias's comments:
>>
>> On 26 May 2017 at 19:11, Matthias Braun <mbraun@apple.com> wrote:
>>>> Based on a not-too-detailed examination of the lit codebase, it does
>>>> not seem that it would be too difficult to add this capability: during
>>>> the test discovery phase, we could copy the required files to the remote
>>>> host. Then, when we run the test, we could just prefix the run command,
>>>> similarly to how it is done for running the tests under valgrind. It
>>>> would be up to the user to provide a suitable command for copying and
>>>> running files on the remote host (using rsync, ssh, telnet or any
>>>> other transport they choose).
>>>
>>> This seems to be the crux to me: what does "required files" mean?
>>> - All the executables mentioned in the RUN line? What if llvm was compiled as a library? Will we copy those too?
>> For executables, I was considering just listing them explicitly (in
>> lit.local.cfg, I guess), although parsing the RUN line should be
>> possible as well. Even with RUN parsing, I expect we would need some way
>> to explicitly add files to the copy list (e.g. for lldb tests we also
>> need to copy the program we are going to debug).
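
For illustration, such a per-directory configuration might look something like this (the attribute names are hypothetical, not an existing lit feature):

    # lit.local.cfg - illustrative sketch only; "remote_copy_files" and
    # "remote_run_prefix" are hypothetical settings, not current lit options.
    # Files listed here would be pushed to the remote host during discovery.
    config.remote_copy_files = [
        'llc', 'FileCheck',   # tools named in the RUN lines of these tests
        'Inputs/',            # auxiliary files the tests read
    ]
    # Prefix applied to every RUN command, analogous to the valgrind wrapper.
    config.remote_run_prefix = 'ssh my-device cd /tmp/lit-remote &&'
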
>>
>> As for libraries, I see a couple of solutions:
>> - declare these configurations unsupported for remote executions
>> - copy over ALL shared libraries
>> - have automatic tracking of runtime dependencies - all of this
>> information should pass through llvm_add_library macro, so it should
>> be mostly a matter of exporting this information out of cmake.
>> These can be combined in the sense that we can start in the
>> "unsupported" state, and then add some support for it once there is a
>> need for it (we don't need it right now).
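
On the third option: assuming CMake were taught to write out a manifest of runtime dependencies, the lit side could stay very small. A purely illustrative sketch (the manifest file name and the remote_copy_files setting are both hypothetical):

    # Illustrative only: extend the (hypothetical) copy list with shared
    # libraries recorded by CMake in a (hypothetical) runtime_deps.txt
    # placed in the build directory.
    import os

    manifest = os.path.join(config.llvm_obj_root, 'runtime_deps.txt')
    if os.path.exists(manifest):
        with open(manifest) as f:
            config.remote_copy_files += [line.strip() for line in f if line.strip()]
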
> Sounds good. An actively managed list of files to copy in the lit configuration is a nice, simple solution, provided we have some regularly running public bot so we can catch missing things. But I assume setting up a bot was your plan anyway.
>
>>
>>> - Can tests include other files? Do they need special annotations for that?
>> My initial idea was to just copy over all files in the Inputs folder.
>> Do you know of any other dependencies that I should consider?
> I hadn't noticed that we had already developed a convention with the "Inputs" folders, so I guess all that is left to do is making sure all tests actually follow that convention.

The Google-internal execution of LLVM's tests relies on this property - so at least for the common tests and the targets Google cares about, this property is pretty well enforced.
>
>>
>>>
>>> As another example: the llvm test-suite can perform remote runs (see test-suite/litsupport/remote.py for the implementation). That code assumes that the remote device has an NFS mount, so the relevant parts of the filesystem look alike on the host and the remote device. I'm not sure that is the best solution, as NFS introduces its own sort of flakiness and potential skew in I/O-heavy benchmarks, but it avoids the question of what to copy to the device.
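
(Side note: as I understand it, that approach boils down to wrapping each command in ssh and relying on the paths being identical on both sides. A much simplified illustration, not the actual litsupport code:

    # Simplified illustration of the shared-filesystem approach (not the
    # actual litsupport/remote.py code): with NFS the build directory path
    # is valid on both machines, so a command can simply be replayed via ssh.
    def wrap_for_remote(cmd, host, workdir):
        return 'ssh %s "cd %s && %s"' % (host, workdir, cmd)

    print(wrap_for_remote('./test 5 > out.txt', 'my-board', '/nfs/test-suite/build'))
)
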
>>
>> Requiring an NFS mount is a non-starter for us (there is no way to get an
>> android device to create one), although if we were able to hook in
>> a custom script which does a copy to simulate the "mount", we might be
>> able to work with it. Presently I am mostly thinking about correctness
>> tests, and I am not worried about benchmark skew.
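
For the android case, that "custom script which does a copy" could presumably be something as small as the following (a hypothetical sketch using adb; none of this exists in lit today):

    #!/usr/bin/env python
    # Hypothetical copy-and-run hook for an android device: push the test's
    # files with adb and run the command from the pushed directory.
    import subprocess
    import sys

    def push_and_run(local_dir, remote_dir, command):
        subprocess.check_call(['adb', 'push', local_dir, remote_dir])
        return subprocess.call(['adb', 'shell', 'cd %s && %s' % (remote_dir, command)])

    if __name__ == '__main__':
        # usage: remote_run.py <local_dir> <command...>
        sys.exit(push_and_run(sys.argv[1], '/data/local/tmp/lit', ' '.join(sys.argv[2:])))
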
>
> Sure, I don't think I would end up with an NFS mount strategy if I were starting fresh today. Also, the test-suite benchmarks (especially the SPEC ones) tend to have more complicated, harder-to-track inputs.
>
> - Matthias