[llvm-dev] [cfe-dev] Testing Best Practices/Goals (in the context of compiler-rt)
Xinliang David Li via llvm-dev
llvm-dev at lists.llvm.org
Tue Mar 1 12:10:47 PST 2016
On Tue, Mar 1, 2016 at 11:10 AM, David Blaikie via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
>
>
> On Tue, Mar 1, 2016 at 8:19 AM, Robinson, Paul <
> Paul_Robinson at playstation.sony.com> wrote:
>
>> (I'm clearly not receiving everything on this thread, despite checking
>> both corporate and personal junk filters; and likely I'm repeating some
>> things others have said; but I've been wanting to chime in for a while and
>> made time for it this morning.)
>>
>>
>>
>> I think it's not so much that compiler-rt (or libcxx) people want
>> end-to-end tests specifically, so much as the nature of the environment
>> requires things that look like end-to-end tests.
>>
>
> While I realize reading the whole thread is impractical/not very valuable,
> this might be where you're already lacking enough context. That's really
> the discussion: David is pretty adamant about /wanting/ end-to-end tests
> (as are Alexey and Anna, so far as I can tell). Not just using them as a
> means to an end.
>
> And I am not objecting to end-to-end tests at all -
>
Good to hear that :)
> I realize they're the easiest way to exercise the contents of compiler-rt.
> We have this discussion and approach even in parts of LLVM, and lld for
> example, using llvm-mc to assemble example objects rather than checking in
> binary objects to test various tools that need objects. (In some cases we
> check in objects, in some cases we use the other approach.) It's
> sufficiently open to debate that I'm not disagreeing with that, but I think
> it's healthy to discuss/weigh the benefits there on a regular basis. That's
> not what I'm getting at here.
>
> I think the example here:
> http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160208/330759.html
> is a good one. Clang produces instrumented functions with counters for
> profiling. We already have some functions that have profiling, some
> functions that don't. We already have compiler-rt tests to demonstrate that
> it can handle functions with and without profiling counters. We change
> Clang to remove counters from some functions (implicit special members).
> Why would we add a new test to compiler-rt to test that it can handle not
> having counters on these particular functions? This isn't a new/novel
> codepath in compiler-rt, it's just another function without a counter.
>
>
It tests many different small pieces of the whole pipeline and makes sure
they work:
1) ctors are not skipped by the instrumenter and the body is properly
instrumented
2) profile data for special functions is properly lowered and emitted, and
not discarded in downstream pipelines
3) coverage map data is properly generated for such special functions
4) the coverage reader and annotator can handle such cases and annotate the
source properly
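Concretely, the pipeline those four points exercise is the standard instrument/run/merge/report sequence. The sketch below uses the stock LLVM tool names; flags and file names are chosen for illustration, and it skips itself if the toolchain is not installed:

```shell
#!/bin/sh
# Illustrative end-to-end profile/coverage pipeline of the kind the
# compiler-rt tests above exercise. Skips gracefully when tools are absent.
for t in clang++ llvm-profdata llvm-cov; do
  command -v "$t" >/dev/null 2>&1 || { echo "SKIP: $t not found"; exit 0; }
done

tmp=$(mktemp -d) && cd "$tmp" || exit 1

cat > demo.cpp <<'EOF'
struct S { S() {} int f() { return 1; } };   // ctor + ordinary method
int main() { S s; return s.f() - 1; }
EOF

# 1) frontend instruments the code (including ctors)
clang++ -fprofile-instr-generate -fcoverage-mapping demo.cpp -o demo \
  || { echo "SKIP: profile runtime unavailable"; exit 0; }
# 2) running the binary makes the profile runtime lower/emit the data
LLVM_PROFILE_FILE=demo.profraw ./demo
# 3) the profile reader merges the raw profile
llvm-profdata merge -o demo.profdata demo.profraw
# 4) the coverage annotator maps it back onto the source
llvm-cov show ./demo -instr-profile=demo.profdata demo.cpp
echo "PIPELINE OK"
```

A regression at any of the four stages breaks the final report, which is why a single small source file covers the whole chain.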
Yes, it might overlap with existing llvm/clang tests, but paying the tiny
cost of adding a representative test for a special function has value that
outweighs its downside. Other points:
1) Given the small size of the compiler-rt test suite, we are attacking the
wrong (non-existent) problem here
2) Even if it ever became a problem (too many tests) in the future, there are
other ways to solve it (splitting it, for instance)
3) I don't want contributors to feel intimidated when adding reliable test
cases
That is my position -- but don't call me adamant -- I just think this is a
non-problem.
thanks,
David
>>
>> Clang and LLVM are libraries with a ream of little driver programs that
>> give the test-writer a lot of control over the fiddly bits that they want
>> to test. AIUI the driver programs evolved specifically to allow
>> test-writers to have a lot of control over the fiddly bits that they want
>> to test. This makes it easy to write a test that exercises your 6-line
>> patch and as little else as possible.
>>
>>
>>
>> As an editorial aside, this encourages people to write tiny tests that
>> exercise their 6-line patches and as little else as possible, meaning that
>> the Clang and LLVM "lit" tests are a crazy quilt of teeny tests that
>> individually exercise as little as possible. While they might add up to a
>> respectable degree of coverage all by themselves, in the grand scheme of
>> things they are "smoke tests" in the sense that mainly they tell you
>> whether a given revision is worth putting through your more expansive,
>> expensive and thorough (and end-to-end) testing. The "lit" tests really
>> don't break that often, given the velocity and size of the overall project,
>> but they can't be (were never intended to be, AFAICT) more than a starting
>> point for real QA.
>>
>>
>>
>> Getting back to the main topic: Compiler-rt and libcxx are of course
>> also inherently designed as libraries, however they don't seem to have a
>> comparable set of little driver programs that give test-writers a lot of
>> control over the fiddly bits that they want to test. Instead they have to
>> rely on tools like Clang and system linkers and whatnot to construct tests
>> with fewer degrees of freedom than Clang and LLVM test-writers are
>> accustomed to. These give the APPEARANCE of being end-to-end tests, from
>> the perspective of Clang/LLVM test writers who are used to really fine
>> control over the fiddly bits that they want to test. But they aren't
>> necessarily end-to-end tests, in intent; they are just the kinds of tests
>> that these projects are able to construct, given the tools at hand.
>> (Looking in from outside, anyway.)
>>
>>
>>
>> End-to-end tests written to user-facing specs are invaluable, and no
>> responsible vendor would try to deliver a toolchain without them. The
>> overall Clang/LLVM project provides comparatively little of this, which
>> is not hugely surprising really given that (a) the community comprises
>> essentially developers not QA people, and (b) end-to-end tests carry a
>> lot of baggage in terms of dependencies on environmental factors beyond
>> the control of the test system. End-to-end tests are, consequently,
>> mostly up to the vendor (currently).
>>
>>
>>
>> Not to say it couldn't happen. Libcxx has a test suite that does a
>> pretty reasonable job of providing mostly environment-neutral tests for
>> the libcxx implementation, a lot of which is not so hard to adapt to
>> other implementations and cross-execution environments. But it's not
>> something that day-to-day Clang/LLVM developers are expected to run.
>> Probably compiler-rt should more intentionally evolve in that direction.
>>
>>
>>
>> Thanks for listening,
>>
>> --paulr
>>
>>
>>
>> *From:* cfe-dev [mailto:cfe-dev-bounces at lists.llvm.org] *On Behalf Of *Xinliang
>> David Li via cfe-dev
>> *Sent:* Sunday, February 28, 2016 10:33 AM
>> *To:* David Blaikie
>> *Cc:* llvm-dev; David Blaikie via cfe-dev
>> *Subject:* Re: [cfe-dev] [llvm-dev] Testing Best Practices/Goals (in the
>> context of compiler-rt)
>>
>>
>>
>>
>>
>>
>>
>> On Sun, Feb 28, 2016 at 7:46 AM, David Blaikie <dblaikie at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 4:59 PM, Xinliang David Li <xinliangli at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 4:28 PM, David Blaikie via llvm-dev <
>> llvm-dev at lists.llvm.org> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 3:17 PM, Xinliang David Li <davidxl at google.com>
>> wrote:
>>
>> Sean and Alexey have said a lot on this topic. Here is my version of
>> explanation that LLVM testing is not suitable to replace end to end testing.
>>
>>
>>
>> - The most important reason being that LLVM tests tend to test
>> 'implementation details'. This has many drawbacks:
>>
>> a) by only writing test cases like this, it is hard to expose bugs
>> inherent in the implementation itself;
>>
>> b) the connection between the implementation and end-user expectation
>> is not obvious
>>
>> c) oftentimes, we will have to throw out the existing
>> implementation for a whole new, more efficient implementation -- not only do
>> we need to pay the cost of tedious test case updates (from one impl to
>> another), but all the previous implementation-specific code coverage
>> becomes useless -- how are we sure the new implementation does not break the
>> end-user expectation?
>>
>>
>> Yep, there are certainly tradeoffs between unit/feature/integration
>> testing.
>>
>>
>>
>>
>> End-end tests on the other hand have no such drawbacks -- its
>> specification is well defined:
>>
>> 1) it is easy to write test cases based on well defined specifications
>> and cover all corner cases (e.g, new language constructs etc);
>>
>>
>>
>> I'm not sure that integration/end-to-end testing is more likely to
>> exercise corner cases.
>>
>>
>>
>> I actually meant various cases covered by public specs. Test writers do
>> not need to assume internal path coverage (e.g., assuming this construct is
>> emitted via the same code path as the other construct, so let's skip
>> testing one of them and save some test time) -- the assumption can be
>> harmful whenever the implementation changes.
>>
>>
>>
>> It seems to me we always make assumptions about internal implementation
>> details with any testing and/or our testing matrix becomes huge (for
>> example we don't test that every optimization preserves/doesn't break
>> profile counters - many optimizations and parts of code are completely
>> orthogonal to other properties of the code - and we certainly don't do that
>> with end-to-end tests)
>>
>> In any case, this is a pretty consistent philosophy of targeted testing
>> across the LLVM project. compiler-rt seems to be an outlier here and I'm
>> trying to understand why it should be so.
>>
>> I don't disagree that some amount of end-to-end testing can be
>> beneficial, though I perhaps disagree on how much - but mostly it has been
>> a pretty core attitude of the LLVM project across subprojects that they
>> test only themselves as much as possible in their "check"-style regression
>> suite.
>>
>>
>>
>>
>>
>>
>>
>> 2) it is easy to extract test cases from real failures from very large
>> apps
>>
>>
>>
>> We seem to have been pretty good, as a community, at producing the narrow
>> feature/unit tests for breakages from very large applications/scenarios. I
>> wouldn't be happy about not having targeted tests because we had broad
>> coverage from some large test - so I don't think this removes the need to
>> reduce and isolate test cases.
>>
>>
>>
>> If the runtime test case is not in a form that is self-contained and can
>> be executed, it has probably been reduced so much that the original
>> problem can no longer be recognized.
>>
>>
>>
>> Presumably the same is true of the many LLVM IR-level regression tests we
>> add (& even Clang regression tests that test Clang's LLVM IR output, but
>> don't test the ultimate product of that IR and the behavior of that
>> product), no? This is a pretty core choice the LLVM project has made, to
>> test in a fairly narrow/isolated way.
>>
>>
>>
>> Cutting down a small reproducer (that is, one that is runnable) usually
>> takes a huge amount of effort, and it is not something we should throw
>> away easily.
>>
>>
>>
>> I don't think we're throwing it away when we keep the isolated regression
>> tests (such as a Clang frontend test, etc) - that I see as the product of
>> such reduction.
>>
>> In any case, some amount of end-to-end testing doesn't seem bad (we have
>> the test-suite for a reason, it does lots of end-to-end testing, and should
>> probably do lots more), it just seems out of place to me - based on the
>> philosophy/approach of the LLVM project historically.
>>
>>
>>
>>
>>
>> Is your concern really about where those tests should be, or whether that
>> end-to-end testing should be run by default?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 3) once the test case is written, it almost never needs to be updated
>> as long as the public interfaces (options) are used.
>>
>>
>>
>> End-to-end testing also covers areas not in the compiler proper -- like
>> the runtime, linker, other tools, and their subtle interactions.
>>
>>
>>
>> This is actually a drawback of integration testing - we don't actually
>> want to test those products. And we don't want to make our testing
>> dependent on them, generally - it makes the tests more expensive in a
>> number of ways (more work to setup, costlier to run, more likely to break
>> for non-product reasons, more work to debug because they involve more
>> moving parts). This is one of the tradeoffs of broader testing strategies.
>>
>>
>>
>> So you think it is ok to make the assumption that everything will just
>> work and release it to the customer?
>>
>>
>>
>> For instance, the latest darwin i386 profile-rt bug exposed by the
>> improved profile runtime test coverage would be hard to expose via
>> LLVM-only testing.
>>
>>
>>
>> I'm not familiar with the bug - could you provide more detail?
>>
>>
>>
>> The runtime assumes the stop symbol (for a given section) is well aligned
>> and points to the end of the padding, but the i386 darwin linker does not
>> guarantee this, leading to garbage being written out.
>>
>>
>>
>> OK - sounds like a bug in the Darwin linker, yes? Why would it be
>> compelling for us to test for bugs in external tools? Would we expect the
>> people working on a linker to test for bugs in Clang?
>>
>> But, yes, obviously people care about the overall ecosystem and need to
>> test that it all ultimately works together - generally, the test-suite has
>> been where that happens. (again, not something I'm saying is the
>> necessary/only place that should happen, but historically, that's been the
>> pretty hard line between 'check' target tests and end-to-end execution
>> tests)
>>
>>
>>
>>
>>
>>
>>
>> In short, I think runtime tests do not overlap with LLVM testing -- even
>> though they look like they test the same thing. For reference, we currently
>> have < 100 profile-rt end-to-end tests, which is alarmingly few. We should
>> push to increase their coverage.
>>
>>
>>
>> While I don't fundamentally have a problem with different kinds of
>> testing (Up to feature/integration testing, etc), I really don't want that
>> testing to become what we rely on for correctness,
>>
>>
>>
>> As I said, there are scenarios where we will unfortunately have to rely on
>> it. For instance, if we completely change the way instrumentation is done
>> (from compiler to runtime), all existing unit tests are nullified --
>> though they are not expected to be run every day by every other developer,
>> the coverage in this space needs to be there.
>>
>>
>>
>> Then we rewrite those tests - LLVM's IR changes regularly, and that's
>> what we do - we don't throw all the tests out. (unless the feature changes
>> to be unrecognizable, then we work up new tests - such as when the new
>> Windows exception handling representation was added, I assume)
>>
>>
>>
>> You missed the point here. Such a rewrite is not guaranteed to give you
>> the right coverage. Much LLVM IR testing does checks like: "I am
>> expecting this IR pattern; upstream producers, don't break it!" The
>> connection to, and checking of, the end result (what the user sees) from
>> the IR pattern is totally absent. When the test is rewritten, you cannot
>> simply write the newly expected pattern -- it needs to be verified to be
>> equivalent (producing the same end result) -- pure reasoning is not
>> enough.
>>
>>
>>
>>
>>
>>
>> "not expected to be run every day by every other developer" - this is the
>> point I'm trying to make. Compiler-rt should not be special and the holder
>> of "all the end-to-end tests". Its "check" target should work like the rest
>> of LLVM and be for targeted tests on compiler-rt. If we want intentional
>> (rather than as a convenient method of access, as talked about earlier)
>> end-to-end testing, it should be separated (either at least into a separate
>> target or possibly a separate repo, like the test-suite)
>>
>>
>>
>> you think running 'check-all' should not invoke those tests? Having a new
>> check target works fine too.
>>
>>
>>
>>
>>
>>
>>
>> nor what we expect developers (or even most buildbots) to run for
>> correctness. They are slower and problematic in different ways that would
>> slow down our velocity.
>>
>>
>>
>> That is true and totally fine.
>>
>>
>>
>>
>> That sort of coverage should still be carefully chosen for when it adds
>> value, not too scattershot/broad (we have the test-suite for really broad
>> "does this work with the real world" sort of stuff - but anything below
>> that should still be fairly isolated/targeted, I think).
>>
>>
>>
>> For the runtime feature owners and developers, those tests are a must --
>> not that the burden of running them should be on every other developer.
>>
>>
>>
>> Why is the runtime unique/different in this regard - all the arguments
>> you've been making sound like they would apply equally to clang, for
>> example? & that Clang should have end-to-end, execution tests? Yet this is
>> something the LLVM project has pushed back /very/ hard on and tried very
>> hard to not do and to remove when it comes up. So I'm trying to understand
>> why this case is so different.
>>
>>
>>
>> I feel we are running in circles here. The main point is that we should
>> have tests that test things against public specifications (e.g., language
>> specs, C++ ABIs, etc.), which are invariants. If there is a way to test
>> those invariants without resorting to end-to-end tests, that is best. For
>> instance, both LLVM IR and the C++ ABI are well specified; clang tests can
>> choose to test against those.
>>
>>
>>
>> Many runtime features, on the other hand, only have well-defined
>> user-level specifications. The implementation can and will often change.
>> Having tests that only test against internal implementations is not
>> enough, thus end-to-end testing is needed.
>>
>>
>>
>> Hope this is clearer,
>>
>>
>>
>> thanks,
>>
>>
>>
>> David
>>
>>
>>
>>
>>
>> - David
>>
>>
>>
>>
>>
>> David
>>
>>
>>
>>
>>
>>
>>
>> - David
>>
>>
>>
>>
>>
>> thanks,
>>
>>
>>
>> David
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 2:15 PM, David Blaikie <dblaikie at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 2:10 PM, Alexey Samsonov <vonosmas at gmail.com>
>> wrote:
>>
>>
>>
>> On Fri, Feb 26, 2016 at 1:34 PM, David Blaikie <dblaikie at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 1:31 PM, Sean Silva <chisophugis at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 1:11 PM, David Blaikie <dblaikie at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 26, 2016 at 1:07 PM, Sean Silva <chisophugis at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Wed, Feb 17, 2016 at 8:45 AM, David Blaikie via cfe-dev <
>> cfe-dev at lists.llvm.org> wrote:
>>
>>
>>
>>
>>
>> On Fri, Feb 12, 2016 at 5:43 PM, Alexey Samsonov <vonosmas at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Thu, Feb 11, 2016 at 1:50 PM, David Blaikie <dblaikie at gmail.com>
>> wrote:
>>
>>
>>
>>
>>
>> On Wed, Feb 10, 2016 at 3:55 PM, Alexey Samsonov via cfe-dev <
>> cfe-dev at lists.llvm.org> wrote:
>>
>> I mostly agree with what Richard and Justin said. Adding a few notes
>> about the general strategy we use:
>>
>>
>>
>> (1) lit tests which look "end-to-end" have proved to be way more
>> convenient for testing runtime libraries than unit tests. We do have
>> the latter, and use them to provide test coverage for utility functions,
>> but we quite often accompany a fix to the runtime library with an
>> "end-to-end" small reproducer extracted from the real-world code that
>> exposed the issue.
>> Incidentally, this tests a whole lot of other functionality: Clang
>> driver, frontend, LLVM passes, etc., but it's not the intent of the test.
>>
>>
>>
>> Indeed - this is analogous to the tests for, say, LLD that use llvm-mc to
>> produce the inputs rather than checking in object files. That area is open
>> to some discussion as to just how many tools we should rope in/how isolated
>> we should make tests (eg: maybe building the json object file format was
>> going too far towards isolation? Not clear - opinions differ). But the
>> point of the test is to test the compiler-rt functionality that was
>> added/removed/modified.
>>
>> I think most people are in agreement with that, while acknowledging the
>> fuzzy line about how isolated we might be.
>>
>>
>>
>> Yes.
>>
>>
>>
>>
>>
>> These tests are sometimes platform-specific and poorly portable, but they
>> are more reliable (we take the same steps as the
>> user of the compiler), and serve the purpose of documentation.
>>
>>
>>
>> (2) If we change LLVM instrumentation, we add a test to LLVM. If we
>> change Clang code generation or driver behavior, we add
>> a test to Clang. No excuses here.
>>
>>
>>
>> (3) Sometimes we still add a compiler-rt test for a change in LLVM or
>> Clang: e.g. if we enhance the Clang frontend to teach UBSan
>> to detect yet another kind of overflow, it makes sense to add a test
>> to the UBSan test suite that demonstrates it, in addition to the
>> Clang test verifying that we emit a call to the UBSan runtime. Also, the
>> compiler-rt test allows us to verify that the actual error report
>> we present to the user is sane.
>>
>>
>>
>> This bit ^ is a bit unclear to me. If there was no change to the UBSan
>> runtime, and the code generated by Clang is equivalent/similar to an
>> existing use of the UBSan runtime - what is it that the new compiler-rt
>> test is providing? (perhaps you could give a concrete example you had in
>> mind to look at?)
>>
>>
>>
>> See r235568 (change to Clang) followed by r235569 (change to compiler-rt
>> test). Now, it's a cheat because I'm fixing test, not adding it. However, I
>> would have definitely added it, if it was missing.
>>
>>
>>
>> Right, I think the difference here is "if it was missing" - the test case
>> itself seems like it could be a reasonable one (are there other tests of
>> the same compiler-rt functionality? (I assume the compiler-rt functionality
>> is the implementation of sadd/ssub?))
>>
>>
>>
>> In this case, a change to Clang
>> instrumentation (arguments passed to UBSan runtime callbacks) improved
>> the user-facing part of the tool, and the compiler-rt test suite is a good
>> place to verify that.
>>
>>
>>
>> This seems like the problematic part - changes to LLVM improve the
>> user-facing part of Clang, but we don't add end-to-end tests of that, as a
>> general rule. I'm trying to understand why the difference between that and
>> compiler-rt
>>
>>
>>
>> In what way do changes in LLVM change the user-facing part of Clang?
>>
>> It obviously depends on how broadly one defines user-facing. Is a 1%
>> performance improvement from a particular optimization user-facing? Is
>> better debug info accuracy user-facing? I'm not sure. But it seems clear
>> that "the user sees a diagnostic or not" definitely is.
>>
>>
>>
>> There's more than just performance in LLVM - ABI features, and yes, I'd
>> argue some pieces of debug info are pretty user facing (as are some
>> optimizations). We also have the remarks system in place now. (also the
>> compiler crashing (or not) is pretty user facing).
>>
>>
>>
>> I'd argue that we probably should have some sort of integration tests for
>> ABI features. I think at the moment we're getting by thanks to self-hosting
>> and regularly building lots of real-world programs with ToT-ish compilers.
>>
>>
>>
>> Perhaps so, but I'd argue that they shouldn't be run as part of "make
>> check" & should be in a separate test grouping (probably mostly run by
>> buildbots) for the purpose of integration testing.
>>
>>
>>
>> If you have an llvm/clang/compiler-rt/libc++/libc++abi checkout, they are
>> not run as a part of "make check", only "make check-all", which kind of
>> makes sense (run *all* the tests!). You're free to run "make check-clang",
>> "make check-asan" etc.
>> if you're sure your changes are limited in scope. Just to be clear - do
>> you suggest that compiler-rt tests are too heavy for this configuration,
>> and want to introduce an extra level - i.e. extract "make check-compiler-rt"
>> out of "make check-all", and introduce "make check-absolutely-everything",
>> which would encompass them?
>>
>>
>>
>> Fair point, check-all would be/is check all the features, but, yes,
>> potentially pulling out a separate subgroup for integration testing of any
>> kind. Yes, I realize this has the "-Wall" problem. But I suppose what I'm
>> getting at is "check" in the LLVM project parlance is, with the exception
>> of compiler-rt, the "lightweight, immediate verification tests, not
>> integration testing" & I would rather like that to be the case across the
>> board.
>>
>> Perhaps using a different verb for cross-project testing (I don't object
>> to a catch-all target for running all the different kinds of testing we
>> have). Naming is hard.
>>
>>
>>
>>
>>
>> We've made a pretty conscious, deliberate, and consistent effort to not
>> do integration testing across the LLVM projects in "make check"-like
>> testing, and to fix it where we do find it. It seems to me that compiler-rt
>> diverged from that approach and I'm not really in favor of that divergence.
>>
>>
>>
>> I don't see why consistency by itself is a good thing.
>>
>>
>>
>> Makes it easier to work between projects. Sets expectations for
>> developers when they work in one area or another.
>>
>>
>>
>> As a sanitizer developer, the current situation is convenient for me, but
>> if it harms / slows down / complicates the workflow for other developers
>> or LLVM as a whole - sure, let's fix it.
>>
>>
>> One of the things I'm trying to understand is what the benefit of this
>> extra testing is - to take the add/sub example again, is adding extra
>> coverage for printing different text through the same mechanism in
>> compiler-rt valuable? What sort of regressions/bugs is that catching that
>> makes it compelling to author, maintain, and incur the ongoing execution
>> cost of?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> - David
>>
>>
>>
>>
>>
>> -- Sean Silva
>>
>>
>>
>>
>>
>> Also, I think part of this is that in compiler-rt there are usually more
>> moving parts we don't control. E.g. it isn't just the interface between
>> LLVM and clang. The information needs to pass through archivers, linkers,
>> runtime loaders, etc. that all may have issues that affect whether the user
>> sees the final result. In general the interface between LLVM and clang has
>> no middlemen so there really isn't anything to check.
>>
>>
>>
>> Correctness/behavior of the compiler depends on those things too
>> (linkers, loaders, etc) to produce the final working product the user
>> requested. If we emitted symbols with the wrong linkage we could produce
>> linker errors, drop important entities, etc. But we don't generally test
>> that the output of LLVM/Clang produces the right binary when linked, we
>> test that it produces the right linkages on the resulting entities.
>>
>> - David
>>
>>
>>
>>
>>
>> -- Sean Silva
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> You may argue that the Clang test would have been enough (I disagree with
>> that), or that it qualifies as "adding coverage" (maybe).
>>
>>
>>
>>
>>
>> (4) True, we're intimidated by the test-suite :) I feel that the current
>> use of the compiler-rt test suite to check the compiler-rt libs better
>> follows the doctrine described by David.
>>
>>
>>
>> Which David? ;) (I guess David Li, not me)
>>
>>
>>
>> Nope, paragraph 2 from your original email.
>>
>>
>>
>> I think maybe what could be worth doing would be separating out the
>> broader/intentionally "end to end" sort of tests from the ones intended to
>> test compiler-rt in relative isolation.
>>
>>
>>
>> It's really hard to draw the line here; even some of the compiler-rt unit
>> tests require instrumentation, and therefore depend on new features of
>> Clang/LLVM. Unlike builtins, which are
>> trivial to test in isolation, testing sanitizer runtimes in isolation
>> (w/o the compiler) is often hard to implement (we tried to do so for TSan,
>> but found unit tests extremely hard to write),
>> and is barely useful - compiler-rt runtimes don't consist of modules
>> (like LLVMCodeGen and LLVMMC for instance), and are never used w/o the
>> compiler anyway.
>>
>>
>>
>>
>> Most importantly, I'd expect only the latter to run in a "make check-all"
>> run, as we do for Clang/LLVM, etc.
>>
>>
>>
>> And now we're getting to the goals :) Why would such a change be good? Do
>> you worry about the time it takes to execute the test suite?
>>
>>
>>
>>
>>
>> Also, there's significant complexity in the compiler-rt test suite that
>> narrows the tests executed
>> to those supported by the current host.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Feb 10, 2016 at 2:33 PM, Xinliang David Li via cfe-dev <
>> cfe-dev at lists.llvm.org> wrote:
>>
>>
>>
>>
>>
>> On Wed, Feb 10, 2016 at 2:11 PM, Justin Bogner via llvm-dev <
>> llvm-dev at lists.llvm.org> wrote:
>>
>> David Blaikie via cfe-dev <cfe-dev at lists.llvm.org> writes:
>> > Recently had a bit of a digression in a review thread related to some
>> tests
>> > going in to compiler-rt (
>> >
>> http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20160208/330759.html
>> > ) and there seems to be some disconnect at least between my expectations
>> > and reality. So I figured I'd have a bit of a discussion out here on the
>> > dev lists where there's a bit more visibility.
>> >
>> > My basic expectation is that the lit tests in any LLVM project except
>> the
>> > test-suite are targeted tests intended to test only the functionality in
>> > the project. This seems like a pretty well accepted doctrine across most
>> > LLVM projects - most visibly in Clang, where we make a concerted effort
>> not
>> > to have tests that execute LLVM optimizations, etc.
>> >
>> > There are exceptions/middle ground to this - DIBuilder is in LLVM, but
>> > essentially tested in Clang rather than writing LLVM unit tests. It's
>> > somewhat unavoidable that any of the IR building code (IRBuilder,
>> > DIBuilder, IR asm printing, etc) is 'tested' incidentally in Clang in
>> > process of testing Clang's IR generation. But these are seen as
>> incidental,
>> > not intentionally trying to cover LLVM with Clang tests (we don't add a
>> > Clang test if we add a new feature to IRBuilder just to test the
>> IRBuilder).
>> >
>> > Another case with some middle ground are things like linker tests and
>> > objdump, dwarfdump, etc - in theory to isolate the test we would checkin
>> > binaries (or the textual object representation lld had for a while,
>> etc) to
>> > test those tools. Some tests instead checkin assembly and assemble it
>> with
>> > llvm-mc. Again, not to cover llvm-mc, but on the assumption that
>> llvm-mc is
>> > tested, and just using it as a tool to make tests easier to maintain.
>> >
>> > So I was surprised to find that the compiler-rt lit tests seem to
>> diverge
>> > from this philosophy & contain more intentional end-to-end tests (eg:
>> > adding a test there when making a fix to Clang to add a counter to a
>> > function that was otherwise missing a counter - I'd expect that to be
>> > tested in Clang and that there would already be coverage in compiler-rt
>> for
>> > "if a function has a counter, does compiler-rt do the right thing with
>> that
>> > counter" (testing whatever code in compiler-rt needs to be tested)).
>> >
>> > Am I off base here? Are compiler-rt's tests fundamentally different to
>> the
>> > rest of the LLVM project? Why? Should they continue to be?
>>
I think there's a bit of grey area in terms of testing the runtime -
>> generally it's pretty hard to use the runtime without a fairly
>> end-to-end test, so tests of the runtime often end up looking pretty
>> close to an end-to-end test.
>>
>> That said, I don't think that should be used as an excuse to sneak
>> arbitrary end-to-end tests into compiler-rt. We should absolutely write
>> tests in clang and llvm that we're inputting what we expect to the
>> runtime and try to keep the tests in compiler-rt as focused on just
>> exercising the runtime code as possible.
>>
>>
>>
>> Yes, we should not use compiler-rt tests as an excuse for not adding
>> clang/LLVM tests. The latter should always be added if possible -- they
>> are platform-independent and are the first line of defense. The runtime
>> tests' focus is also more on the runtime lib itself and the interaction
>> between the runtime, compiler, binutils, and other tools.
>>
>>
>>
>> David
>>
>>
>> IIUC, the correct place for integration tests in general is somewhere
>> like test-suite, but I think it's a bit intimidating to some people to
>> add new tests there (Are there docs on this?). I suspect some of the
>> profiling related tests in compiler-rt are doing a bit much and should
>> graduate to a spot in the test-suite (but I don't have time to volunteer
>> to do the work, unfortunately).
>> _______________________________________________
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>
>>
>>
>>
>>
>> _______________________________________________
>> cfe-dev mailing list
>> cfe-dev at lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>
>>
>>
>>
>>
>> --
>>
>> Alexey Samsonov
>> vonosmas at gmail.com
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>