[llvm-commits] [llvm] r162034 - /llvm/trunk/include/llvm/Object/ELF.h

Chandler Carruth chandlerc at google.com
Thu Aug 16 13:54:16 PDT 2012


I'm adding the dev list to this discussion, as it's a bit meta, and not
specific to these patches.

On Thu, Aug 16, 2012 at 12:55 PM, Will Schmidt <will_schmidt at vnet.ibm.com> wrote:

> > On Thu, 2012-08-16 at 15:06 -0400, Rafael Espíndola wrote:
> > On 16 August 2012 14:33, Chandler Carruth <chandlerc at google.com> wrote:
> > > Test cases?
> >
> > We discussed this a bit on IRC. It is my impression that we already
> > went too far with PPC ELF without testing. As Roman himself noticed two
> > days ago, it is possible to break hello world on PPC ELF and not notice
> > it:
>
> >
> >
> http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20120813/148239.html
> >
> > We have been too slow to add tests, even when we noticed that entire
> > functions could be deleted and the tests would still pass:
> >
> >
> http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20111219/133931.html
> > (which btw, is still the case on trunk).
> >
> > So sorry for the current state for those starting to hack now on PPC
> > ELF, but to make sure we are in a better position tomorrow I think we
> > have to start requiring tests as features are added and hopefully also
> > cover the backlog we have so far.
>
> Hi all,
> The changes so far are an attempt to get existing test cases to run.  So
> I'd argue against the changes being considered new features.  :-)
>

Yea, new features is probably the wrong word. Let's say "new code". =]


> 2003-01-04-ArgumentBug.ll is the one chosen at random that Adhemerval
> has been focused on.  That wasn't called out explicitly, and we do
> expect some number of test cases to begin functioning better once this
> initial batch of patches gets worked into the tree.
>

This is good, but where is this test? :: runs find :: Ah, it's in the JIT.
Ok, it's not clear that this is the best test to start with, but let's not
get ahead of ourselves...


> What got us in trouble here is that we didn't understand the
> 'test-suite' as being a separate batch of tests from 'make check', and
> simply didn't know to run it (until after Roman hit the failures and
> reported the regressions to us).


The test-suite is good to run, but it's not really what Rafael or I are
referring to here.

Tests in LLVM can be broken down into a few categories:

1) Direct unittests of core APIs in llvm/unittests
2) Direct testing of the output of the various analyses, transformations,
etc., in llvm/test
3) Testing the execution of compiled small programs by building them and
running them on test hardware (possibly the host, possibly not). These are
in test-suite/SingleSource/UnitTests and
projects/test-suite/SingleSource/Regression
4) Testing the execution of large "real world" programs in the nightly test
suite after building with the Clang/LLVM toolchain. These consist of
everything else in the test-suite/, especially the test-suite/MultiSource
side of things

We don't expect the latter two to be used on every single check-in. We have
build bots that run them, and you should use them whenever changing
something in a way that is risky or may well have failures that only show
up in the last "whole system" testing pattern.

We expect nearly every patch[1] to be covered to some extent either by
unittests of the APIs or by FileCheck-driven output tests in 'test/...'.
The reason for this split in importance is that these first two categories
don't require any particular hardware or execution environment, and thus
are suitable for requiring every developer to run and pass on every
check-in. This is what Rafael and I are looking for in these patches, as
such tests ensure that developers who don't have a PowerPC system to run
execution tests on don't immediately break that backend. =]
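
For concreteness, here is a minimal sketch of what such a FileCheck-driven
test for the PPC backend might look like. The RUN line, the function, and
the exact text in the CHECK lines are illustrative assumptions on my part,
not copied from any test actually in the tree:

  ; RUN: llc -mtriple=powerpc64-unknown-linux-gnu < %s | FileCheck %s

  ; Check that a simple 32-bit add is selected to the 'add' instruction and
  ; that the function returns with 'blr'. (Illustrative patterns only; the
  ; real checks should match whatever output you determined to be correct.)
  define i32 @add32(i32 %a, i32 %b) nounwind {
  entry:
    %sum = add i32 %a, %b
    ret i32 %sum
  }
  ; CHECK: add 3, 3, 4
  ; CHECK: blr

A file like this dropped into test/CodeGen/PowerPC/ runs under 'make check'
on any host, so no PowerPC hardware is needed to keep it passing.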

Now, as Rafael is alluding to, there is an unfortunate reality: we don't
have enough tests in the tree to cover the *existing* PowerPC support, much
less to cover your additions. That means you may need to add tests for the
existing PPC support just to be able to fix a bug in it and have a test
that covers that fix. I hope this isn't too much of a burden, as it will
greatly improve the quality of our PPC support.

The common pattern when building up support for a platform is to take some
of the execution tests in #3 and #4 which aren't working correctly, figure
out what isn't working and what the output of the compiler *should* be in
order for it to work. Then write a very small test case which currently
produces the wrong output, use FileCheck within it to assert that the
correct output is produced instead, and include it with the fix as a new
test in #1 or #2. This way, we won't regress even if we don't run the
complete test suites.
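
As a purely hypothetical illustration of that pattern (the bug, the
function, and the checked instructions below are made up for the example,
not taken from an actual PPC failure): suppose an execution test fails
because a 64-bit load gets emitted as the 32-bit 'lwz' instead of 'ld'.
The reduced regression test might look like:

  ; RUN: llc -mtriple=powerpc64-unknown-linux-gnu < %s | FileCheck %s

  ; Reduced from a failing execution test: this i64 load should be emitted
  ; as the 64-bit 'ld', not the 32-bit 'lwz'. (Hypothetical bug, for
  ; illustration only.)
  define i64 @load64(i64* %p) nounwind {
  entry:
    %v = load i64* %p
    ret i64 %v
  }
  ; CHECK-NOT: lwz
  ; CHECK: ld 3, 0(3)
  ; CHECK: blr

Once that is in the tree, nobody can reintroduce the bug without 'make
check' failing, whether or not they have access to a PowerPC box.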

If everything currently in #3 and #4 works correctly, but other programs
don't, my suggestion would be to follow essentially the same process with
whatever system tests you have that are failing. We're hoping to make it
easier in the future to contribute new system tests to #4, but currently
it's a bit painful.


Now, the JIT tests are just really really special. Honestly, I would ignore
them and JIT in general until the normal Clang/LLVM static compilation
pipeline is working well and well tested. Then ask on llvmdev for how best
to test JIT problems if you still have them. =]

Hopefully not too much information overload! =] Let me know if I can
clarify anything.
-Chandler


PS: You really don't need to read the footnotes... ;] I'm long-winded.

[1] Yea, "nearly" is kind of annoying here. If you're curious, here is how
I would break it down further:

There are a few things inside of LLVM that are not in an easily testable
state, for example the inner details of the register allocator. For some of
these cases we can't add tests because we would need a function with
N-thousands of virtual registers before the bad code path is taken. We're
actively trying to reduce the amount of LLVM's pipeline that is subject to
this, though, so it's important to try to find a way to test these things
first.

There are also patches where adding a test is infeasible: for example,
patches that fix algorithms which were O(n^2) or exponential. In some
cases, the inputs that trigger these paths are necessarily too large to
include in a human-readable test suite.

Finally there are lots of patches which just don't need testing. These are
refactorings, cleanups, and other things which don't actually add, remove,
or change functionality, and thus they should already be covered by
existing tests.