[cfe-dev] [LLVMdev] [Announcement] Call For 3.3 Testers!

Nikola Smiljanic popizdeh at gmail.com
Tue Apr 30 00:33:45 PDT 2013


Not that I know of, but it's definitely something we could use.

The basic idea is to use the test-release.sh script that can be found
in the llvm source tree under utils/release:
$ test-release.sh --help
usage: test-release.sh -release X.Y -rc NUM [OPTIONS]

 -release X.Y      The release number to test.
 -rc NUM           The pre-release candidate number.
 -j NUM            Number of compile jobs to run. [default: 3]
 -build-dir DIR    Directory to perform testing in. [default: pwd]
 -no-checkout      Don't checkout the sources from SVN.
 -no-64bit         Don't test the 64-bit version. [default: yes]
 -enable-ada       Build Ada. [default: disable]
 -enable-fortran   Enable Fortran build. [default: disable]
 -disable-objc     Disable ObjC build. [default: enable]
 -test-debug       Test the debug build. [default: no]
 -test-asserts     Test with asserts on. [default: no]

So you would run this to get the baseline (building the 3.2 final release):

test-release.sh -release 3.2 -final

For the 3.3 release candidates you'd probably use -test-debug and
-test-asserts as well. The script runs the basic regression tests, and
that's the first thing to check for failures; it also compares the outputs
of Phase2 and Phase3 and reports any differences (we've been seeing these
in previous releases).
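
To make that concrete, here is roughly what the two invocations look like;
the -j value and build directories are only examples, adjust them for your
machine:

# baseline: build and test the previous (3.2) release
test-release.sh -release 3.2 -final -j 4 -build-dir ~/rel-3.2-baseline

# release candidate: e.g. the first 3.3 candidate, with debug and asserts testing
test-release.sh -release 3.3 -rc 1 -test-debug -test-asserts -j 4 \
    -build-dir ~/rel-3.3-rc1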

The next thing would be to run the test-suite for both releases and use
findRegressions.py (also in utils/release) to compare the output of 3.2
final with the freshly built 3.3. To run the test-suite you run something
like this inside the llvm/projects/llvm-test directory:

make -k LLVMCC_OPTION=clang ENABLE_BUILT_CLANG=1 ENABLE_PARALLEL_REPORT=1 \
    TEST=simple report > ../../../simple-report.txt 2>&1
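
You need one such report per compiler (say simple-report-3.2.txt from the
3.2 baseline build and simple-report-3.3.txt from the candidate), and then
feed both to the comparison script. A minimal sketch; the report names are
just examples and you should double-check the exact arguments
findRegressions.py expects in the script itself:

python utils/release/findRegressions.py \
    simple-report-3.2.txt simple-report-3.3.txt > regressions-3.2-vs-3.3.txt

Anything that passed with 3.2 but fails with the 3.3 candidate is a
regression and should be reported.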

You report any issues you run into: file bugzilla reports, or post to the
mailing list if you're unsure.

This is just to give you a feel for what it actually takes, but the
process is relatively simple.

On Tue, Apr 30, 2013 at 2:41 PM, Mehmet Erol Sanliturk
<m.e.sanliturk at gmail.com> wrote:
>
>
>
> On Mon, Apr 29, 2013 at 1:12 PM, Bill Wendling <wendling at apple.com> wrote:
>>
>> Hear ye! Hear ye! This is a call for testers for the 3.3 release!!!
>>
>> What's Expected
>> ---------------
>>
>> You might be asking yourself, "Self, I would like to be an LLVM tester for
>> the 3.3 release, but I don't know what's involved in being one." Well, ask
>> yourself no more! Not only do I have the answers for you, but talking to
>> yourself will cause people to avoid you.
>>
>> Here's a short list of things you're expected to do:
>>
>> 1) You will maintain a machine for the duration of testing. Updating the
>> OS or tools would add extra variables to the equation that may delay
>> testing. So we expect your machine to be stable the whole time.
>>
>> 2) You will compile the previous release (3.2) and then run the full test
>> suite for that release. This is your baseline for future testing.
>>
>> 3) When the newest release candidate is announced, you'll download the
>> release candidate's sources, compile them, and run the regression tests.
>>
>> 4) Assuming that the regression tests passed, you will then run the full
>> test suite.
>>
>> 5) You will compare the results from (4) with the full test suite results
>> from the last release (the baseline you generated in (2)).
>>
>> 6) If there are no regressions in (5), then you will package up a tar-ball
>> of the binaries you generated and send them to me. I'll post those binaries
>> on the website so that others may test with them.
>>
>> 7) Most importantly: You are expected to file PR reports for *any* issues
>> you run into.
>>
>> 8) And, of course, put up with me pestering you to get things done. :-)
>>
>> We have scripts to compile and run the tests for the release candidates.
>> You're expected to complete your testing fairly quickly so that we can get
>> the binaries out to the community for further testing. ("Fairly quickly"
>> here is intentionally vague. Some machines aren't fast, or people don't have
>> time for an immediate response. But we would like to have binaries before
>> the week is up. :-) )
>>
>> We plan for two iterations of the above type of testing, with a week in
>> between them to allow for bug fixing. If we have show stoppers after the
>> second round, we will need to add a third round of testing, but we strive to
>> avoid that as much as possible. The whole process takes roughly a month to
>> do.
>>
>> Here are some of the platforms we currently support:
>>
>> * Mac OS X (64-bit)
>> * FreeBSD (32- and 64-bit)
>> * Linux (32- and 64-bit)
>> * ARM
>> * Windows (experimental)
>>
>> If you are interested in being a tester, please send me an email and let
>> me know which platform you'd like to test!!
>>
>> Share and enjoy!
>> -bw
>>
>
>
>
> Is there a web page that describes the above testing phases in detail in an
> algorithmic way, such as:
>
>
> (A) :
>
> (1) Download ...x...
> (2) Extract ...x...
> (3) Apply script ...y...
> (4) If output is ...
>      then ... Do the following : ...
>      else ... Do the following : ...
> .
> .
> .
>
> (B) :
> .
> .
> .
>
>
> Such a detailed algorithmic description may allow more (less experienced)
> people to participate in testing.
>
>
> Thank you very much.
>
> Mehmet Erol Sanliturk
>
> _______________________________________________
> cfe-dev mailing list
> cfe-dev at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev
>


