[llvm-commits] [zorg] r101742 - in /zorg/trunk/lnt: docs/tests.rst lnt/tests/nt.py

Daniel Dunbar daniel at zuster.org
Sun Apr 18 12:18:51 PDT 2010


Author: ddunbar
Date: Sun Apr 18 14:18:50 2010
New Revision: 101742

URL: http://llvm.org/viewvc/llvm-project?rev=101742&view=rev
Log:
LNT: Implement 'nt' built-in test, this is a replacement for NewNightlyTest.pl.

The usage model differs slightly from NewNightlyTest.pl. While NewNightlyTest.pl
tried to "do everything", this test just focuses on running the LLVM test-suite
in the "nightly" configuration, and collecting and reporting the data. In
particular, it expects that users have already checked out and built the
necessary projects. This makes it more adaptable, for example, for use in
buildbot, for testing branches, and so on.

It is already far more featureful than NewNightlyTest.pl. Some key features:
 - Supports a few well-chosen options to allow very fast (<1s) testing of the
   script itself. This is useful for developing the LNT infrastructure. See 'lnt
   runtest nt --help'.

 - Is documented.

 - Doesn't support arbitrary arguments to the test-suite makefiles. Instead, it
   expects everything to be given as explicit arguments so that it can be noted
   and reported properly.

 - Infers and reports various pieces of information about the compiler under test.

 - Supports reporting revision numbers.

 - Has saner defaults -- doesn't run CBE or JIT by default, doesn't blow away
   test results by default. It still needs some automatic test-directory
   rotation features.
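For context, the run-order inference backing the revision reporting can be
condensed into roughly this sketch (a simplified, hypothetical rendering of the
logic in nt.py's run_test below; the real code reads the revisions via
GetSourceVersion):

```python
def infer_run_order(llvm_revision, cc_src_revision, explicit=None):
    # Hypothetical condensation of the run-order logic in run_test():
    # prefer an explicit user-provided order, otherwise take the most
    # forward numeric revision among LLVM and the compiler under test.
    run_order = explicit
    if run_order is None:
        if llvm_revision.isdigit():
            run_order = llvm_revision
        if (cc_src_revision.isdigit() and
            (run_order is None or int(run_order) < int(cc_src_revision))):
            run_order = cc_src_revision
        if run_order is not None:
            # Fixed-width, so string comparison matches numeric order.
            run_order = '%7d' % int(run_order)
    return run_order
```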

This effectively makes the LLVM test-suite Makefiles an "implementation detail",
at least for the purposes of the nightly test. This, in conjunction with the
fast test turnaround and other LNT testing features, should allow me to make
much faster progress on the LNT infrastructure, once everyone is migrated.
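On the reporting side, the patch converts rows of the test-suite's
report.nightly.csv into LNT samples; the naming scheme can be sketched like
this (a simplified, hypothetical helper mirroring the loop in run_test; a '*'
cell is how the nightly report marks a failure):

```python
import os

def csv_cell_to_sample(program, metric, units, value, only_test=None):
    # Hypothetical helper mirroring the CSV-to-sample loop in run_test():
    # test names look like 'nt.<program>.<metric>.<units>', with dots in
    # the program path mangled to underscores.
    if only_test is not None:
        program = os.path.join(only_test, program)
    name = 'nt.%s.%s.%s' % (program.replace('.', '_'), metric, units)
    if value == '*':
        # A '*' cell means the test failed; report a zero 'success' sample.
        return (name + '.success', [0])
    return (name, [float(value)])
```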

Modified:
    zorg/trunk/lnt/docs/tests.rst
    zorg/trunk/lnt/lnt/tests/nt.py

Modified: zorg/trunk/lnt/docs/tests.rst
URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/lnt/docs/tests.rst?rev=101742&r1=101741&r2=101742&view=diff
==============================================================================
--- zorg/trunk/lnt/docs/tests.rst (original)
+++ zorg/trunk/lnt/docs/tests.rst Sun Apr 18 14:18:50 2010
@@ -59,4 +59,90 @@
 Built-in Tests
 --------------
 
-None yet.
+LLVM test-suite (aka LLVM nightly test)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``nt`` built-in test runs the LLVM test-suite execution and performance
+tests, in the "nightly test" configuration. This test allows running many
+different applications and benchmarks (e.g., SPEC), with various compile
+options, and in several different configurations (for example, using an LLVM
+compiler like ``clang`` or ``llvm-gcc``, running under the LLVM JIT via the
+``lli`` tool, or testing new code generator
+passes).
+
+The ``nt`` test requires that the LLVM test-suite repository, a working LLVM
+compiler, and an LLVM source tree and build tree are available. Currently, the
+LLVM build tree is expected to have been built in the Release configuration.
+Unlike the prior ``NewNightlyTest.pl``, the ``nt`` tool does not check out or
+build anything; it is expected that users manage their own LLVM source and
+build trees. Ideally, each of the components should be based on the same LLVM
+revision (except perhaps the LLVM test-suite), but this is not required.
+
+The test runs the LLVM test-suite builds and execution inside a user-specified
+sandbox directory. By default, each test run will be done in a timestamped
+directory inside the sandbox, and the results left around for post-mortem
+analysis. Currently, the user is responsible for cleaning up these directories
+to manage disk space.
+
+The tests are always expected to be run using out-of-tree builds -- this is a
+more robust model and allows sharing the same source trees across many test
+runs. One current limitation is that the LLVM test-suite repository will not
+function correctly if an in-tree build is done, followed by an out-of-tree
+build. It is very important that the LLVM test-suite repository be left
+pristine.
+
+The following command shows an example of running the ``nt`` test suite on a
+local build::
+
+  $ rm -rf /tmp/BAR
+  $ lnt runtest nt \
+       --sandbox /tmp/BAR \
+       --cc ~/llvm.obj.64/Release/bin/clang \
+       --cxx ~/llvm.obj.64/Release/bin/clang++ \
+       --llvm-src ~/llvm \
+       --llvm-obj ~/llvm.obj.64 \
+       --test-suite ~/llvm-test-suite \
+       TESTER_NAME \
+        -j 16
+  2010-04-17 23:46:40: using nickname: 'TESTER_NAME__clang_DEV__i386'
+  2010-04-17 23:46:40: creating sandbox: '/tmp/BAR'
+  2010-04-17 23:46:40: starting test in '/private/tmp/BAR/test-2010-04-17_23-46-40'
+  2010-04-17 23:46:40: configuring...
+  2010-04-17 23:46:50: testing...
+  2010-04-17 23:51:04: loading test data...
+  2010-04-17 23:51:05: generating report: '/private/tmp/BAR/test-2010-04-17_23-46-40/report.json'
+
+The first seven arguments are all required -- they specify the sandbox path, the
+compilers to test, and the paths to the required sources and builds. The
+``TESTER_NAME`` argument is used to derive the name for this tester (in
+conjunction with some inferred information about the compiler under test). This
+name is used as a short identifier for the test machine; generally it should be
+the hostname of the machine or the name of the person who is responsible for the
+tester. The ``-j 16`` argument is optional; in this case it specifies that tests
+should be run in parallel using up to 16 processes.
+
+In this case, we can see from the output that the test created a new sandbox
+directory, then ran the test in a subdirectory in that sandbox. The test outputs
+a limited amount of summary information as testing is in progress. The full
+information can be found in .log files within the test build directory (e.g.,
+``configure.log`` and ``test.log``).
+
+The final test step was to generate a test report inside the test
+directory. This report can now be submitted directly to an LNT server. For
+example, if we have a local server running as described earlier, we can run::
+
+  $ lnt submit --commit=1 http://localhost:8000/submitRun \
+      /tmp/BAR/test-2010-04-17_23-46-40/report.json
+  STATUS: 0
+
+  OUTPUT:
+  IMPORT: /tmp/FOO/lnt_tmp/data-2010-04-17_16-54-35ytpQm_.plist
+    LOAD TIME: 0.34s
+    IMPORT TIME: 5.23s
+  ADDED: 1 machines
+  ADDED: 1 runs
+  ADDED: 1990 tests
+  COMMITTING RESULT: DONE
+  TOTAL IMPORT TIME: 5.57s
+
+and view the results on our local server.

Modified: zorg/trunk/lnt/lnt/tests/nt.py
URL: http://llvm.org/viewvc/llvm-project/zorg/trunk/lnt/lnt/tests/nt.py?rev=101742&r1=101741&r2=101742&view=diff
==============================================================================
--- zorg/trunk/lnt/lnt/tests/nt.py (original)
+++ zorg/trunk/lnt/lnt/tests/nt.py Sun Apr 18 14:18:50 2010
@@ -1,11 +1,528 @@
+import csv
+import os
+import re
+import subprocess
+import sys
+import time
+
+from datetime import datetime
+
+from lnt.testing.util.commands import note, warning, error, fatal
+from lnt.testing.util.commands import capture, which
+
+import lnt.testing.util.compilers
+
+def timestamp():
+    return datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
+
+def run_test(nick_prefix, opts):
+    # Compute TARGET_FLAGS.
+    target_flags = []
+
+    # FIXME: Eliminate this blanket option.
+    target_flags.extend(opts.cflags)
+
+    if opts.arch is not None:
+        target_flags.append('-arch')
+        target_flags.append(opts.arch)
+    if opts.isysroot is not None:
+        target_flags.append('-isysroot')
+        target_flags.append(opts.isysroot)
+
+    # Compute TARGET_LLCFLAGS.
+    target_llcflags = []
+    if opts.mcpu is not None:
+        target_llcflags.append('-mcpu')
+        target_llcflags.append(opts.mcpu)
+    if opts.relocation_model is not None:
+        target_llcflags.append('-relocation-model')
+        target_llcflags.append(opts.relocation_model)
+
+    # Set the make variables to use.
+    make_variables = {
+        'TARGET_CC' : opts.cc_reference,
+        'TARGET_LLVMGCC' : opts.cc_under_test,
+        'TARGET_FLAGS' : ' '.join(target_flags),
+        'TARGET_LLCFLAGS' : ' '.join(target_llcflags),
+        'ENABLE_OPTIMIZED' : '1',
+        }
+
+    # Set test selection variables.
+    if opts.test_cxx:
+        make_variables['TARGET_CXX'] = opts.cxx_reference
+        make_variables['TARGET_LLVMGXX'] = opts.cxx_under_test
+    else:
+        make_variables['TARGET_CXX'] = 'false'
+        make_variables['TARGET_LLVMGXX'] = 'false'
+        make_variables['DISABLE_CXX'] = '1'
+    if not opts.test_cbe:
+        make_variables['DISABLE_CBE'] = '1'
+    if not opts.test_jit:
+        make_variables['DISABLE_JIT'] = '1'
+    if not opts.test_llc:
+        make_variables['DISABLE_LLC'] = '1'
+    if opts.test_llcbeta:
+        make_variables['ENABLE_LLCBETA'] = '1'
+    if opts.test_small:
+        make_variables['SMALL_PROBLEM_SIZE'] = '1'
+
+    if opts.threads > 1:
+        make_variables['ENABLE_PARALLEL_REPORT'] = '1'
+
+    # Stash the variables we want to report.
+    public_make_variables = make_variables.copy()
+
+    # Set remote execution variables, if used.
+    if opts.remote:
+        make_variables['REMOTE_HOST'] = opts.remote_host
+        make_variables['REMOTE_USER'] = opts.remote_user
+        make_variables['REMOTE_PORT'] = str(opts.remote_port)
+        make_variables['REMOTE_CLIENT'] = opts.remote_client
+
+        # FIXME: This is a huge hack, but we should just eliminate this. It is
+        # only used by a few tests.
+        make_variables['TARGET_ARCH'] = 'ARM'
+
+    # Get compiler info.
+    cc_info = lnt.testing.util.compilers.get_cc_info(opts.cc_under_test,
+                                                     target_flags)
+
+    # Construct the nickname from a few key parameters.
+    cc_nick = '%s_%s' % (cc_info.get('cc_name'), cc_info.get('cc_build'))
+    nick = "%s__%s__%s" % (nick_prefix, cc_nick,
+                           cc_info.get('cc_target').split('-')[0])
+    print >>sys.stderr, "%s: using nickname: %r" % (timestamp(), nick)
+
+    # Set up the sandbox.
+    if not os.path.exists(opts.sandbox_path):
+        print >>sys.stderr, "%s: creating sandbox: %r" % (
+            timestamp(), opts.sandbox_path)
+        os.mkdir(opts.sandbox_path)
+
+    # Create the per-test directory.
+    start_time = timestamp()
+    if opts.timestamp_build:
+        ts = start_time.replace(' ','_').replace(':','-')
+        build_dir_name = "test-%s" % ts
+    else:
+        build_dir_name = "build"
+    basedir = os.path.join(opts.sandbox_path, build_dir_name)
+
+    # Canonicalize paths, in case we are using e.g. an NFS remote mount.
+    #
+    # FIXME: This should be eliminated, along with the realpath call below.
+    basedir = os.path.realpath(basedir)
+
+    if os.path.exists(basedir):
+        needs_clean = True
+    else:
+        needs_clean = False
+        os.mkdir(basedir)
+
+    # Unless not using timestamps, we require the basedir not to exist.
+    if needs_clean and opts.timestamp_build:
+        fatal('refusing to reuse pre-existing build dir %r' % basedir)
+
+    # FIXME: Auto-remove old test directories.
+
+    print >>sys.stderr, '%s: starting test in %r' % (timestamp(), basedir)
+
+    # Configure the test suite.
+    if opts.run_configure or not os.path.exists(os.path.join(
+            basedir, 'Makefile.config')):
+        configure_log_path = os.path.join(basedir, 'configure.log')
+        configure_log = open(configure_log_path, 'w')
+
+        args = [os.path.realpath(os.path.join(opts.test_suite_root,
+                                              'configure')),
+                '--with-llvmsrc=%s' % opts.llvm_src_root,
+                '--with-llvmobj=%s' % opts.llvm_obj_root,
+                '--with-externals=%s' % opts.test_suite_externals]
+        print >>configure_log, '%s: running: %s' % (timestamp(),
+                                                    ' '.join('"%s"' % a
+                                                             for a in args))
+        configure_log.flush()
+
+        print >>sys.stderr, '%s: configuring...' % timestamp()
+        p = subprocess.Popen(args=args, stdin=None, stdout=configure_log,
+                             stderr=subprocess.STDOUT, cwd=basedir)
+        res = p.wait()
+        configure_log.close()
+        if res != 0:
+            fatal('configure failed, log is here: %r' % configure_log_path)
+
+    # Always blow away any existing report.
+    report_path = basedir
+    if opts.only_test is not None:
+        report_path = os.path.join(report_path, opts.only_test)
+    report_path = os.path.join(report_path, 'report.nightly.csv')
+    if os.path.exists(report_path):
+        os.remove(report_path)
+
+    # Execute the tests.
+    test_log_path = os.path.join(basedir, 'test.log')
+    test_log = open(test_log_path, 'w')
+
+    args = ['make', '-k', '-j', str(opts.threads),
+            'TEST=nightly', 'report', 'report.nightly.csv']
+    args.extend('%s=%s' % (k,v) for k,v in make_variables.items())
+    if opts.only_test is not None:
+        args.extend(['-C',opts.only_test])
+    print >>test_log, '%s: running: %s' % (timestamp(),
+                                           ' '.join('"%s"' % a
+                                                    for a in args))
+    test_log.flush()
+
+    print >>sys.stderr, '%s: testing...' % timestamp()
+    p = subprocess.Popen(args=args, stdin=None, stdout=test_log,
+                         stderr=subprocess.STDOUT, cwd=basedir)
+    res = p.wait()
+    test_log.close()
+
+    end_time = timestamp()
+
+    # Compute the test samples to report.
+    sample_keys = []
+    sample_keys.append(('gcc.compile', 'GCCAS', 'time'))
+    sample_keys.append(('bc.compile', 'Bytecode', 'size'))
+    if opts.test_llc:
+        sample_keys.append(('llc.compile', 'LLC compile', 'time'))
+    if opts.test_llcbeta:
+        sample_keys.append(('llc-beta.compile', 'LLC-BETA compile', 'time'))
+    if opts.test_jit:
+        sample_keys.append(('jit.compile', 'JIT codegen', 'time'))
+    sample_keys.append(('gcc.exec', 'GCC', 'time'))
+    if opts.test_cbe:
+        sample_keys.append(('cbe.exec', 'CBE', 'time'))
+    if opts.test_llc:
+        sample_keys.append(('llc.exec', 'LLC', 'time'))
+    if opts.test_llcbeta:
+        sample_keys.append(('llc-beta.exec', 'LLC-BETA', 'time'))
+    if opts.test_jit:
+        sample_keys.append(('jit.exec', 'JIT', 'time'))
+
+    # Load the test samples.
+    print >>sys.stderr, '%s: loading test data...' % timestamp()
+    test_samples = []
+
+    # If nightly test went screwy, it won't have produced a report.
+    if not os.path.exists(report_path):
+        fatal('nightly test failed, no report generated')
+
+    report_file = open(report_path, 'rb')
+    reader_it = iter(csv.reader(report_file))
+
+    # Get the header.
+    header = reader_it.next()
+    if header[0] != 'Program':
+        fatal('unexpected report file, missing header')
+
+    # Verify we have the keys we expect.
+    if 'Program' not in header:
+        fatal('missing key %r in report header' % 'Program')
+    for item in sample_keys:
+        if item[1] not in header:
+            fatal('missing key %r in report header' % item[1])
+
+    # We don't use the test info, currently.
+    test_info = {}
+    for row in reader_it:
+        record = dict(zip(header, row))
+
+        program = record['Program']
+        if opts.only_test is not None:
+            program = os.path.join(opts.only_test, program)
+        test_base_name = 'nt.%s' % program.replace('.','_')
+        for name,key,tname in sample_keys:
+            test_name = '%s.%s.%s' % (test_base_name, name, tname)
+            value = record[key]
+            if value == '*':
+                test_samples.append(lnt.testing.TestSamples(
+                        test_name + '.success', [0], test_info))
+            else:
+                test_samples.append(lnt.testing.TestSamples(
+                        test_name, [float(value)], test_info))
+
+    report_file.close()
+
+    # Collect the machine and run info.
+    #
+    # FIXME: Support no-machdep-info.
+    #
+    # FIXME: Import full range of data that the Clang tests are using?
+    machine_info = {}
+    machine_info['uname'] = capture(["uname","-a"],
+                                    include_stderr=True).strip()
+    machine_info['hardware'] = capture(["uname","-m"],
+                                       include_stderr=True).strip()
+    machine_info['os'] = capture(["uname","-sr"], include_stderr=True).strip()
+    machine_info['name'] = capture(["uname","-n"], include_stderr=True).strip()
+    machine_info['gcc_version'] = capture([opts.cc_reference, '--version'],
+                                          include_stderr=True).split('\n')[0]
+    machine = lnt.testing.Machine(nick, machine_info)
+
+    # FIXME: We aren't getting the LLCBETA options.
+    run_info = {}
+    run_info['tag'] = 'nt'
+    run_info.update(cc_info)
+
+    # FIXME: Hack, use better method of getting versions. Ideally, from binaries
+    # so we are more likely to be accurate.
+    run_info['llvm_revision'] = capture([os.path.join(opts.llvm_src_root,
+                                                      'utils',
+                                                      'GetSourceVersion'),
+                                         opts.llvm_src_root],
+                                        include_stderr=True).strip()
+    run_info['test_suite_revision'] = capture([os.path.join(opts.llvm_src_root,
+                                                            'utils',
+                                                            'GetSourceVersion'),
+                                               opts.test_suite_root],
+                                             include_stderr=True).strip()
+    run_info.update(public_make_variables)
+
+    # Set the run order from the user, if given.
+    if opts.run_order is not None:
+        run_info['run_order'] = opts.run_order
+    else:
+        # Otherwise, infer as the most forward revision we found.
+        #
+        # FIXME: Pretty lame, should we just require the user to specify this?
+        if run_info['llvm_revision'].isdigit():
+            run_info['run_order'] = run_info['llvm_revision']
+        if (run_info['cc_src_revision'].isdigit() and
+            ('run_order' not in run_info or
+             int(run_info['run_order']) < int(run_info['cc_src_revision']))):
+            run_info['run_order'] = run_info['cc_src_revision']
+        if 'run_order' in run_info:
+            run_info['run_order'] = '%7d' % int(run_info['run_order'])
+
+    # Generate the test report.
+    lnt_report_path = os.path.join(basedir, 'report.json')
+    print >>sys.stderr, '%s: generating report: %r' % (timestamp(),
+                                                       lnt_report_path)
+    run = lnt.testing.Run(start_time, end_time, info = run_info)
+
+    report = lnt.testing.Report(machine, run, test_samples)
+    lnt_report_file = open(lnt_report_path, 'w')
+    print >>lnt_report_file,report.render()
+    lnt_report_file.close()
+
+    return report
+
+###
+
 import builtintest
+from optparse import OptionParser, OptionGroup
+
+usage_info = """
+Script for running the tests in LLVM's test-suite repository.
+
+This script expects to run against a particular LLVM source tree, build, and
+compiler. It is only responsible for running the tests in the test-suite
+repository, and formatting the results for submission to an LNT server.
+
+Basic usage:
+
+  %%prog %(name)s \\
+    --sandbox FOO \\
+    --cc ~/llvm.obj.64/Release/bin/clang \\
+    --cxx ~/llvm.obj.64/Release/bin/clang++ \\
+    --llvm-src ~/llvm \\
+    --llvm-obj ~/llvm.obj.64 \\
+    --test-suite ~/llvm-test-suite \\
+    FOO
+
+where --sandbox is the directory to build and store results in, --cc and --cxx
+are the full paths to the compilers to test, and the remaining options are paths
+to the LLVM source tree, LLVM object tree, and test-suite source tree. The final
+argument is the base nickname to use to describe this run in reports.
+
+To do a quick test, you can add something like:
+
+    -j 16 --only-test SingleSource
+
+which will run with 16 threads and only run the tests inside SingleSource.
+
+To do a really quick test, you can further add
+
+    --no-timestamp --no-configure
+
+which will cause the same build directory to be used, and the configure step
+will be skipped if it appears to already have been configured. This is
+effectively an incremental retest. It is useful for testing the scripts or
+nightly test, but it should not be used for submissions."""
 
 class NTTest(builtintest.BuiltinTest):
     def describe(self):
         return 'LLVM test-suite compile and execution tests'
 
     def run_test(self, name, args):
-        raise NotImplementedError
+        parser = OptionParser(
+            ("%%prog %(name)s [options] tester-name\n" + usage_info) % locals())
+
+        group = OptionGroup(parser, "Sandbox Options")
+        group.add_option("-s", "--sandbox", dest="sandbox_path",
+                         help="Parent directory to build and run tests in",
+                         type=str, default=None, metavar="PATH")
+        group.add_option("", "--no-timestamp", dest="timestamp_build",
+                         help="Don't timestamp build directory (for testing)",
+                         action="store_false", default=True)
+        group.add_option("", "--no-configure", dest="run_configure",
+                         help=("Don't run configure if Makefile.config is "
+                               "present (only useful with --no-timestamp)"),
+                         action="store_false", default=True)
+        parser.add_option_group(group)
+
+        group = OptionGroup(parser, "Inputs")
+        group.add_option("", "--llvm-src", dest="llvm_src_root",
+                         help="Path to the LLVM source tree",
+                         type=str, default=None, metavar="PATH")
+        group.add_option("", "--llvm-obj", dest="llvm_obj_root",
+                         help="Path to the LLVM object (build) tree",
+                         type=str, default=None, metavar="PATH")
+        group.add_option("", "--test-suite", dest="test_suite_root",
+                         help="Path to the LLVM test-suite sources",
+                         type=str, default=None, metavar="PATH")
+        group.add_option("", "--test-externals", dest="test_suite_externals",
+                         help="Path to the LLVM test-suite externals",
+                         type=str, default='/dev/null', metavar="PATH")
+        parser.add_option_group(group)
+
+        group = OptionGroup(parser, "Test Compiler")
+        group.add_option("", "--cc", dest="cc_under_test", metavar="CC",
+                         help="Path to the C compiler to test",
+                         type=str, default=None)
+        group.add_option("", "--cxx", dest="cxx_under_test", metavar="CXX",
+                         help="Path to the C++ compiler to test",
+                         type=str, default=None)
+        group.add_option("", "--cc-reference", dest="cc_reference",
+                         help="Path to the reference C compiler",
+                         type=str, default=None)
+        group.add_option("", "--cxx-reference", dest="cxx_reference",
+                         help="Path to the reference C++ compiler",
+                         type=str, default=None)
+        parser.add_option_group(group)
+
+        group = OptionGroup(parser, "Test Options")
+        group.add_option("", "--arch", dest="arch",
+                         help="Set -arch in TARGET_FLAGS [%default]",
+                         type=str, default=None)
+        group.add_option("", "--isysroot", dest="isysroot", metavar="PATH",
+                         help="Set -isysroot in TARGET_FLAGS [%default]",
+                         type=str, default=None)
+
+        group.add_option("", "--mcpu", dest="mcpu",
+                         help="Set -mcpu in TARGET_LLCFLAGS [%default]",
+                         type=str, default=None, metavar="CPU")
+        group.add_option("", "--relocation-model", dest="relocation_model",
+                         help=("Set -relocation-model in TARGET_LLCFLAGS "
+                                "[%default]"),
+                         type="str", default=None, metavar="MODEL")
+
+        group.add_option("", "--cflag", dest="cflags",
+                         help="Additional flags to set in TARGET_FLAGS",
+                         action="append", type=str, default=[], metavar="FLAG")
+        parser.add_option_group(group)
+
+        group = OptionGroup(parser, "Test Selection")
+        group.add_option("", "--disable-cxx", dest="test_cxx",
+                         help="Disable C++ tests",
+                         action="store_false", default=True)
+
+        group.add_option("", "--enable-cbe", dest="test_cbe",
+                         help="Enable CBE tests",
+                         action="store_true", default=False)
+        group.add_option("", "--enable-jit", dest="test_jit",
+                         help="Enable JIT tests",
+                         action="store_true", default=False)
+        group.add_option("", "--disable-llc", dest="test_llc",
+                         help="Disable LLC tests",
+                         action="store_false", default=True)
+        group.add_option("", "--enable-llcbeta", dest="test_llcbeta",
+                         help="Enable LLCBETA tests",
+                         action="store_true", default=False)
+
+        group.add_option("", "--small", dest="test_small",
+                         help="Use smaller test inputs and disable large tests",
+                         action="store_true", default=False)
+
+        group.add_option("", "--only-test", dest="only_test", metavar="PATH",
+                         help="Only run tests under PATH",
+                         type=str, default=None)
+        parser.add_option_group(group)
+
+        group = OptionGroup(parser, "Test Execution")
+        group.add_option("-j", "--threads", dest="threads",
+                         help="Number of testing threads",
+                         type=int, default=1, metavar="N")
+
+        group.add_option("", "--remote", dest="remote",
+                         help=("Execute remotely, see "
+                               "--remote-{host,port,user,client} [%default]"),
+                         action="store_true", default=False)
+        group.add_option("", "--remote-host", dest="remote_host",
+                         help="Set remote execution host [%default]",
+                         type=str, default="localhost", metavar="HOST")
+        group.add_option("", "--remote-port", dest="remote_port",
+                         help="Set remote execution port [%default]",
+                         type=int, default=None, metavar="PORT",)
+        group.add_option("", "--remote-user", dest="remote_user",
+                         help="Set remote execution user [%default]",
+                         type=str, default=None, metavar="USER",)
+        group.add_option("", "--remote-client", dest="remote_client",
+                         help="Set remote execution client [%default]",
+                         type=str, default="ssh", metavar="RSH",)
+        parser.add_option_group(group)
+
+        group = OptionGroup(parser, "Output Options")
+        group.add_option("", "--run-order", dest="run_order", metavar="STR",
+                         help="String to use to identify and order this run",
+                         action="store", type=str, default=None)
+        parser.add_option_group(group)
+
+        (opts, args) = parser.parse_args(args)
+        if len(args) != 1:
+            parser.error("invalid number of arguments")
+
+        nick, = args
+
+        # Validate options.
+
+        if opts.sandbox_path is None:
+            parser.error('--sandbox is required')
+
+        # Attempt to infer cc_reference and cxx_reference if not given.
+        if opts.cc_reference is None:
+            opts.cc_reference = which('gcc') or which('cc')
+            if opts.cc_reference is None:
+                parser.error('unable to infer --cc-reference (required)')
+        if opts.test_cxx and opts.cxx_reference is None:
+            opts.cxx_reference = which('g++') or which('c++')
+            if opts.cxx_reference is None:
+                parser.error('unable to infer --cxx-reference (required)')
+
+        if opts.cc_under_test is None:
+            parser.error('--cc is required')
+        if opts.cxx_under_test is None:
+            parser.error('--cxx is required')
+
+        if opts.llvm_src_root is None:
+            parser.error('--llvm-src is required')
+        if opts.llvm_obj_root is None:
+            parser.error('--llvm-obj is required')
+        if opts.test_suite_root is None:
+            parser.error('--test-suite is required')
+
+        if opts.remote:
+            if opts.remote_port is None:
+                parser.error('--remote-port is required with --remote')
+            if opts.remote_user is None:
+                parser.error('--remote-user is required with --remote')
+
+        # FIXME: We need to validate that there is no configured output in the
+        # test-suite directory, that borks things. <rdar://problem/7876418>
+
+        return run_test(nick, opts)
 
 def create_instance():
     return NTTest()




