<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>We are executing these library tests on a bare-metal target, so
copying over batches of files is not possible for us (we can't even
use SCP and SSH). At the moment we use a simple Python script as our
%{exec} command that loads the cross-compiled binary into the
target's memory, where it is then executed.</p>
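<p>For illustration, a stripped-down sketch of what such an %{exec}
wrapper might look like (the load_image and run_image helpers below are
placeholders, not our actual loader code):</p>
<pre>
#!/usr/bin/env python3
# Sketch of an %{exec} wrapper for a bare-metal target. lit invokes it
# as: run_baremetal.py TEST_BINARY ARGS...  The loader details are
# hypothetical placeholders for a debug-probe or serial protocol.
import sys

def load_image(path):
    # Placeholder: write the cross-compiled image into target memory.
    raise NotImplementedError

def run_image():
    # Placeholder: start execution and return the exit code reported
    # back by the target (e.g. over a UART or semihosting channel).
    raise NotImplementedError

def main():
    binary = sys.argv[1]
    load_image(binary)
    sys.exit(run_image())

if __name__ == "__main__":
    main()
</pre>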
<p>Having the tests compiled in a batch and then executed one by one
would, however, work for us. In general, as long as our use case
continues to work without any major changes to our test
infrastructure, I'm all for trying to improve the testing
performance of the libraries.</p>
<p>Cheers,</p>
<p>Dominik<br>
</p>
<div class="moz-cite-prefix">On 15.07.21 22:04, Stephen Neuendorffer
via llvm-dev wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAO0d_3B2yX_tXnr3N7=JYWSfu=NZpN2iMPEQwW6wq0vDkPE9zw@mail.gmail.com">
<div dir="ltr">We've been looking at some related questions and
have some other options that you might be interested in. First
of all a comment: The idea of 'deferred execution' seems
problematic since it fundamentally changes the semantics of the
report from lit. I'm curious how you are able to use this in
practice to, for instance, check that tests were eventually run
somewhere?
<div><br>
</div>
<div>We have an embedded environment where building LLVM itself
is barely feasible due to resource constraints. As a result,
we cross-compile LLVM and generate an installed tarball that is
copied to the embedded system. However, this means that we can't
test the cross-compiled LLVM executables until we get to the
embedded system.</div>
<div>Our approach has been to factor the test directory into a
hierarchical CMake project. This project then uses the
standard LLVM CMake export mechanisms (i.e. find_package) to
find LLVM. This refactoring has no effect on a regular
in-tree toplevel build. However, we can check out the LLVM
tree on the embedded system and build *just the test area*
using the installed tarball of LLVM. I think this CMake
refactoring is something that would be relatively easy to carry
out on the LLVM tree. Relative to your current approach, this
moves the problem of tarballing and remote code execution out
of lit's responsibility and into more of a devops/release
responsibility, which makes more sense to me.</div>
<div><br>
</div>
<div>Perhaps you also have other goals, such as partitioning
tests to run on multiple target nodes? I haven't thought too
much about how that would interact with this approach.</div>
<div><br>
</div>
<div>Separately, we also have the problem of tests that need to
behave differently in different contexts, e.g.:</div>
<div>RUN: clang --target=my_cross_target ... -o test.elf</div>
<div>RUN: %run% test.elf</div>
<div><br>
</div>
<div>In this case, we'd like to be able to test the compilation
part off the target, but when we run the same test on
the target machine, we can both compile and run. Today we
do something similar (as you see above) using a lit
substitution that varies depending on the CMake environment.
Doing this is somewhat clumsy, and I've thought it would be
nicer to move this into lit, allowing the test to be:</div>
<div>
<div><br>
</div>
<div>RUN: clang --target=my_cross_target ... -o test.elf</div>
<div>RUN_ON_TARGET: %run% test.elf</div>
<div><br>
</div>
<div>In this case the behavior of RUN*: lines would be
configurable in lit.cfg.py. This could
implement part of your current use case (although maybe
there would be an impact on how the reporting is done?).</div>
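<div>Roughly, the substitution-based variant we use today can be
expressed in lit.cfg.py along these lines (a sketch only; the
"on_target" parameter and the run_on_target.py wrapper are made-up
names, and how the flag is plumbed through from CMake is
site-specific):</div>
<pre>
# lit.cfg.py (sketch): %run% expands differently depending on where
# the suite is executed. "on_target" and "run_on_target.py" are
# hypothetical names used only for this example.
import lit.formats

config.name = "my-cross-tests"
config.test_format = lit.formats.ShTest(True)

on_target = lit_config.params.get("on_target", "0") == "1"
if on_target:
    # On the target machine, %run% actually executes the binary.
    config.substitutions.append(("%run%", "run_on_target.py"))
else:
    # Off-target we only check that compilation succeeds, so %run%
    # degenerates to a no-op that ignores its arguments.
    config.substitutions.append(("%run%", "true"))
</pre>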
<div><br>
</div>
<div>Steve</div>
<div><br>
</div>
<div><br>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, Jul 15, 2021 at 11:54
AM Petr Hosek via llvm-dev <<a
href="mailto:llvm-dev@lists.llvm.org" moz-do-not-send="true">llvm-dev@lists.llvm.org</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">This is a topic that came out in our recent
discussions about remote test execution in libc++ and other
runtimes.
<div><br>
</div>
<div>The libc++ lit test suite has support for running tests
remotely using a custom executor, and compiler-rt has
similar support. The problem is that this is done in a
very ad hoc way, on a per-command basis.</div>
<div><br>
</div>
<div>The most basic example looks as follows:</div>
<div><br>
</div>
<div> RUN: %{exec} %t.exe</div>
<div><br>
</div>
<div>When executing tests locally, %{exec} would be empty
(or it could be a binary like env). When executing tests
remotely, for example over SSH, which is the most common
case, %{exec} is expanded into a <a
href="https://github.com/llvm/llvm-project/blob/b8b23aa/libcxx/utils/ssh.py"
target="_blank" moz-do-not-send="true">script</a> that
uses SCP to copy the binary to the remote target and SSH
to execute it. Test execution is blocked while we wait for
the command to finish.</div>
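<div>Roughly, such an executor boils down to something like the
following (a simplified sketch rather than the actual ssh.py, which
also handles file dependencies, the environment, and cleanup):</div>
<pre>
# Simplified sketch of an SSH-based executor; not the real ssh.py.
import os
import posixpath
import subprocess
import sys

def main():
    host = sys.argv[1]   # e.g. user@device
    exe = sys.argv[2]    # the test binary produced by %t.exe
    remote = posixpath.join("/tmp", os.path.basename(exe))

    # Copy the binary to the remote target, then execute it there,
    # forwarding any remaining arguments and the exit code.
    subprocess.check_call(["scp", exe, "{}:{}".format(host, remote)])
    sys.exit(subprocess.call(["ssh", host, remote] + sys.argv[3:]))

if __name__ == "__main__":
    main()
</pre>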
<div><br>
</div>
<div>When you only have a handful of tests, this is a
reasonable approach, but it becomes a problem with a large
number of tests (as in the case of libc++) because the
overhead of copying and executing tests one by one can be
significant. It gets worse if setting up the target test
environment is expensive, which can be the case for some
embedded environments.</div>
<div><br>
</div>
<div>It would be more efficient to bundle up all binaries
(with their dependencies), copy them over to the target,
and run them all there, but that pattern is difficult to
express in lit right now.</div>
<div><br>
</div>
<div><a href="https://reviews.llvm.org/D77657"
target="_blank" moz-do-not-send="true">https://reviews.llvm.org/D77657</a>
is one possible implementation but there are some
unresolved issues described in the details of that change.</div>
<div><br>
</div>
<div>While we could try to work around some of these issues,
we think that a better solution would be to introduce a
notion of "deferred execution" into lit, so any RUN lines
marked as deferred wouldn't be run immediately and the
test would be reported with a new status, DEFERRED. We
would then ideally have some way of collecting all
deferred commands and providing a custom handler (for
example via TestingConfig) that could do things like
packaging up all binaries and executing them on the target
device.</div>
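<div>To make the idea concrete, the handler hook could look roughly
like the following. This is entirely hypothetical; neither the
deferred_executor attribute nor any of these names exist in lit
today, and the actual interface is exactly what we'd like to
design:</div>
<pre>
# Hypothetical sketch only -- none of these names exist in lit today.
# The idea: commands from RUN lines marked as deferred are collected
# instead of executed, and a user-supplied handler processes the batch.

def deferred_handler(deferred_commands):
    # 'deferred_commands' would be a list of (test_name, command) pairs
    # whose execution was deferred. A handler for an embedded target
    # could bundle the binaries the commands reference, push them to
    # the device in one transfer, run them there, and map the exit
    # codes back onto the individual tests.
    results = {}
    for test_name, command in deferred_commands:
        results[test_name] = 0  # placeholder for the device's exit code
    return results

# Hooked up from the test config, e.g. via some new TestingConfig
# attribute (name invented for illustration):
#   config.deferred_executor = deferred_handler
</pre>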
<div><br>
</div>
<div>We think that such a feature would be generally useful,
but I'd like to collect more feedback before we go ahead
with the implementation. Do you think such a feature would
be useful? Is there another way of supporting
batched/deferred execution of test binaries with lit?</div>
</div>
</blockquote>
</div>
</blockquote>
</body>
</html>