[llvm-dev] RFC: Adding GCC C Torture Suite to External Test Suites
Sam Elliott via llvm-dev
llvm-dev at lists.llvm.org
Wed Oct 9 05:11:50 PDT 2019
Thanks to the help of Kristof, Hal, Alex, Lewis Revill, and Henry Cheang, the GCC C Torture suite is now part of the LLVM test suite.
In the end, we added the source files to the repo rather than having it as an external test suite.
The relevant commits are:
- https://reviews.llvm.org/rT374155 (Adding the CMake Configuration and licences).
- https://reviews.llvm.org/rT374156 (Adding the Test Case source files).
There are two things to note:
1. Currently we only enable the GCC C Torture suite for the x86 and RISC-V architectures. This is to avoid breakage on other architectures, as we’re using blacklists to exclude tests.
2. There is only a CMake configuration for the GCC C Torture suite. I have not added Makefile support.
We would welcome the addition of support for other architectures. I was not able to do this as I do not have a testing setup for more than the two architectures above.
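For anyone wanting to try the suite locally, it can be driven through the usual llvm-test-suite CMake flow. The sketch below is a rough, untested illustration: the subdirectory restriction via TEST_SUITE_SUBDIRS and the Regression path are assumptions based on the commits above, and the compiler path is a placeholder.

```shell
# Rough sketch (paths are placeholders; the subdirectory name is an
# assumption based on the commits above). Configure the llvm-test-suite
# to build only the Regression subtree containing the imported GCC C
# Torture tests, build it, then run the tests under lit.
mkdir -p build && cd build
cmake -G Ninja \
  -DCMAKE_C_COMPILER=/path/to/clang \
  -DTEST_SUITE_SUBDIRS=SingleSource/Regression \
  /path/to/llvm-test-suite
ninja
llvm-lit -sv SingleSource/Regression
```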
> On 4 Sep 2019, at 7:08 pm, Sam Elliott <selliott at lowrisc.org> wrote:
> Given the replies so far have been tending towards importing rather than having the test suite be external, I have updated the patch, here: https://reviews.llvm.org/D66887
> I would welcome reviewers, noting that the patch is huge, and really the important parts for this review are the build scripts.
> I have not added Makefile support for these tests. I still don’t know the status of the Makefile build system - whether it’s staying around permanently, or whether it’s due to be removed like it was from the main llvm repos.
>> On 3 Sep 2019, at 8:21 pm, Alex Bradbury <asb at lowrisc.org> wrote:
>> On Tue, 3 Sep 2019 at 18:37, Kristof Beyls <kristof.beyls at gmail.com> wrote:
>>> Op di 3 sep. 2019 om 18:36 schreef Finkel, Hal J. via llvm-dev <llvm-dev at lists.llvm.org>:
>>>> On 9/3/19 7:19 AM, Sam Elliott wrote:
>>>>> There are 1500 tests total, and about 100 on the platform-agnostic blacklist. Alex and I do not think this is an onerous burden for maintenance, either as an external test suite or if the test suite is imported.
>>>>> In the long term, if we import the tests, we know we will have to do updates when the Embecosm work lands, and beyond that updates can be more sporadic. It’s not clear to me how much harder these updates will be than if the test suite remains external.
>>>>> We would welcome more views as to whether this suite should be imported or should be an external test suite.
>>>> I lean toward importing - I suspect we'll get better coverage on
>>>> buildbots, and just in general more people will end up using the tests,
>>>> than if it is external. I'm also curious what other people think.
>>> I also thought that importing the tests will result in them being run far more regularly.
>>> I wonder to what extent regressions happen in these tests after a backend has been brought up with them. I.e. once all these tests are made to pass, do they later still capture regressions from time to time, or do they pretty much always keep on passing?
>>> If they do catch regressions later on, the value of running them more frequently (e.g. on buildbots) goes up quite a bit.
>>> Maybe the only reason I could think of to not import them is if they would take a long time to run - making buildbots slower. Is there any data on how long it takes to run these tests?
>> They're fast to build+run. Compiling and executing all ~1400
>> non-masked tests on an i9-9900k takes less than a minute (~40s):
>> * That's using a release build of Clang (will be slower for Debug+Asserts)
>> * Compiling tests with O2 targeting RISC-V
>> * Running tests using qemu-user
>> I typically run a whole bunch of ISA + opt level + ABI variant
>> combinations. You'd obviously hope that regressions don't happen, but it's
>> a useful sanity check to complement the in-tree unit tests. It's
>> helped me catch things when reviewing patches from others and
>> developing my own.
>> I also tend to generate .s, check them into a git repo and use git
>> diff to check for unexpected changes in output.
>> My general feeling is that if a regression or unexpected code change
>> (e.g. code size regression) can be found in one of the torture suite
>> tests then it's great news (vs seeing the same problem in a larger
>> program) - you've already got a fairly small and easy to understand
>> input so it's usually pretty easy to minimise and get to the root of
>> the problem.
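As a rough sketch of that generate-and-diff workflow (the paths, target triple, and test locations below are assumptions for illustration, not necessarily what Alex actually uses):

```shell
# Rough sketch of the snapshot-and-diff workflow (all paths and the
# target triple are assumptions): compile every torture test to
# assembly, commit the result, then rerun after a compiler change and
# let git surface any unexpected codegen differences.
ASMDIR=asm-snapshots
mkdir -p "$ASMDIR"
git -C "$ASMDIR" init -q
for f in gcc-c-torture/execute/*.c; do
  clang --target=riscv32-unknown-elf -O2 -S "$f" \
    -o "$ASMDIR/$(basename "$f" .c).s"
done
git -C "$ASMDIR" add -A
git -C "$ASMDIR" commit -qm "assembly snapshot"
# ...rebuild the compiler, rerun the loop above, then:
git -C "$ASMDIR" diff        # unexpected changes in output show up here
```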
> Sam Elliott
> Software Developer - LLVM
> lowRISC CIC
> selliott at lowrisc.org