[LLVMdev] 2.4 Pre-release (v1) Available for Testing
zaimoni at zaimoni.com
Fri Oct 10 21:27:27 PDT 2008
> On Fri, Oct 10, 2008 at 3:19 PM, Kenneth Boyd <zaimoni at zaimoni.com> wrote:
>> LLVM will be mostly irrelevant on Windows as long as it doesn't have
>> either a native-enough testing framework on Windows, or a build process
>> that chokes on canonical Windows tools. [Specifically, Perl. I get the
>> same errors with unpatched LLVM sources with both ActiveState, and
>> built-from-tarball Perl 5.8.8. I'd guess the root problem is that
>> canonical Windows builds of Perl use cmd rather than sh for the command
>> interpreter.]
>>> .... Is it really worth the trouble to support things like
>>> testing on systems that almost nobody (wants to) use?
>> Assuming that the absence of testing is a sufficient cause for not
>> wanting to use LLVM on that system, and the system is otherwise worth
>> supporting: yes.
> I still do not see why all the tests revolve around llvm-gcc.
The *.ll source tests should not revolve around llvm-gcc. The *.c and
*.cpp tests are as problematic as claimed.
There has been some noise about relocating the frontend tests elsewhere,
but I have no idea when that will actually happen.
> are not really testing llvm, they are testing llvm-gcc. We need a
> good testing framework built in c++ itself, that can be built by the
> same compiler that builds llvm itself, which can then be run to do all
> the tests and return a result back, perhaps even with a network
> library like Boost's to submit results to the website, it can all be
> done in c++ quite easily and would work everywhere just fine, and yet
> people feel the need to write all this external stuff.
I can see using C++ test drivers as part of LLVM testing, but they
really are a different level of testing than the DejaGNU/expect tests.
I would not be comfortable using just C++ test drivers, but I think they
would be useful for enforcing things like IEEE semantics for APFloat,
which are not easily accessible from driver programs.
I've thought about this myself (basically, what to do for platforms that
don't have a reasonable native expect like MingW32), and there aren't
that many good future-compatible choices when you actually need to
examine both stdout and stderr, and can't assume POSIX. My initial
prototyping work on this indicates that it's:
* easy to check for success/failure exit codes (can do this in both a
Windows batch file and a Bourne shell script) and report the last
failing test case
* moderately easy to check for success/failure/assert exit codes and
report all failing test cases (Bourne shell script; I do not know how
to do this in a Windows batch file). (When using the msvcrt assert,
the exit code from an assert is 3, so it can be distinguished from a
normal EXIT_FAILURE. I don't know how this works on other platforms.)
* painful to check stderr/stdout. Cf. earlier flamey thread.
Basically, the only future-compatible options other than DejaGNU/expect
I can think of are POSIX (which will not port to MSVC and MingW32),
shell scripting (I'm skeptical about this actually working generally),
and Tcl. If future-compatibility weren't a requirement, Python is
another option.
(Yes, I know shell scripting isn't Windows-centric. If the alternate
CMake build system stabilizes, having a CTest alternate test driver
suite would make a lot of sense.)
> Yes, we need
> the external stuff to test llvm-gcc, but on platforms where there is
> no llvm-gcc there needs to be a testing framework to fully test llvm
> itself, in all ways, but fully through its api, not something else
> like llvm-gcc.
LLVM testing would benefit from a directly API-exercising testsuite
regardless of whether there is llvm-gcc.