[lldb-dev] [llvm-dev] [cfe-dev] RFC: End-to-end testing
Robinson, Paul via lldb-dev
lldb-dev at lists.llvm.org
Thu Oct 10 08:01:41 PDT 2019
> -----Original Message-----
> From: llvm-dev <llvm-dev-bounces at lists.llvm.org>
> On Behalf Of David Greene via llvm-dev
> Sent: Wednesday, October 09, 2019 9:17 PM
> To: Mehdi AMINI <joker.eph at gmail.com>
> Cc: llvm-dev at lists.llvm.org; cfe-dev at lists.llvm.org;
> openmp-dev at lists.llvm.org; lldb-dev at lists.llvm.org
> Subject: Re: [llvm-dev] [cfe-dev] RFC: End-to-end testing
>
> Mehdi AMINI via cfe-dev <cfe-dev at lists.llvm.org> writes:
>
> > I don't think these particular tests are the most controversial,
> > though, and it is really still fairly "focused" testing. I'm much
> > more curious about larger end-to-end scope: for instance, since you
> > mention debug info and LLDB, what about a test that would verify
> > that LLDB can print a particular variable's content, starting from
> > a source program? Such tests are valuable in the absolute, but it
> > isn't clear to me that we could in practice block any commit that
> > breaks such a test: a bug fix or an improvement in one pass may be
> > perfectly correct in isolation, yet make the test fail by exposing
> > a bug where we are already losing some debug-info precision in a
> > totally unrelated part of the codebase.
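To make that scenario concrete: I'd expect such a test to be a lit test
roughly like the sketch below. This is purely illustrative -- the
%clang_host/%lldb substitutions, the lldb invocation, and the exact
'frame variable' output format are my assumptions, not an existing test.

  // RUN: %clang_host -g -O0 -o %t %s
  // RUN: %lldb -b %t -o 'breakpoint set --name compute' -o run \
  // RUN:   -o next -o 'frame variable x' | FileCheck %s

  int compute(int a) {
    int x = a + 1;  // 'next' steps over this line, so x should be 21
    return x * 2;
  }

  int main(void) { return compute(20); }

  // CHECK: (int) x = 21

The appeal and the danger are the same thing: that CHECK line depends on
the whole pipeline, from clang's debug-info emission through every
mid-level pass to LLDB's DWARF consumer.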
> > I wonder how you see this managed in practice: would you gate any
> > change to InstCombine (or another mid-level pass) on not regressing
> > any of the debug-info quality tests on any backend, from any
> > frontend (not only clang)? Or worse: a middle-end change that ends
> > up emitting a slightly different DWARF construct on this particular
> > test, which trips up LLDB but not GDB (basically exposing a bug in
> > LLDB). Should we require the InstCombine contributor to debug LLDB
> > and fix it first?
>
> Good questions! I think for situations like this I would tend toward
> allowing the change; the test would alert us that something else is
> wrong. At that point it's probably a case-by-case decision. Maybe we
> XFAIL the test. Maybe the fix is easy enough that we just do it and
> the test starts passing again. What's the policy for breaking current
> tests when a change is itself fine but exposes a problem elsewhere
> (adding an assert, for example)?
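For what it's worth, XFAILing a lit test like the sketch above is a
one-line annotation in the test file itself:

  // Mark the test as an expected failure while the exposed bug is
  // tracked; lit's XFAIL directive accepts target triples, feature
  // names, or '*' for all configurations.
  // XFAIL: *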
For debug info in particular, we already have the debuginfo-tests project,
which is separate because it requires executing the test program; this is
something the clang/llvm test suites specifically do NOT require. There
is of course also the LLDB test suite, which I believe can be configured
to use the just-built clang to compile its test programs.
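Something along these lines, for example -- a sketch only, and
LLDB_TEST_COMPILER is my recollection of the CMake knob for pointing the
LLDB tests at a specific compiler, so treat the exact variable name as
an assumption:

  # Configure a monorepo build of clang+lldb, then run the LLDB test
  # suite with the just-built clang compiling the test programs.
  cmake -G Ninja ../llvm \
    -DLLVM_ENABLE_PROJECTS='clang;lldb' \
    -DLLDB_TEST_COMPILER=$(pwd)/bin/clang
  ninja check-lldb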
Regarding breakage policy, it's just like anything else: do what's needed
to make the bots happy. What exactly that means will depend on the
situation. I can cite a small patch that was held off for a ridiculously
long time (around a year) because Chromium had an environmental problem
that they were slow to address. That wasn't even an LLVM bot! But
eventually it got sorted out and our patch went in.
My point here is that this kind of thing happens already; adding a new
e2e test project won't inherently change any policy or how the community
responds to breakage.
--paulr