[LLVMdev] [llvm-commits] [PATCH] A "very verbose" mode for FileCheck

Daniel Dunbar daniel at zuster.org
Thu Jan 17 12:20:35 PST 2013


On Thu, Jan 17, 2013 at 12:11 PM, David Blaikie <dblaikie at gmail.com> wrote:

> On Thu, Jan 17, 2013 at 11:19 AM, Daniel Dunbar <daniel at zuster.org> wrote:
> > On Thu, Jan 17, 2013 at 10:59 AM, David Blaikie <dblaikie at gmail.com>
> > wrote:
> >>
> >> On Thu, Jan 17, 2013 at 10:51 AM, Daniel Dunbar <daniel at zuster.org>
> >> wrote:
> >> > Note that as far as places to put temporary files, the right place
> >> > to put them is alongside the other test outputs in the test output
> >> > "sandbox" directory.
> >> >
> >> > Somewhat orthogonal, but we should also fix up lit to purge those
> >> > sandboxes before it starts a new test run.
> >>
> >> SGTM - though if we are going to go with the plan/features you've
> >> outlined, it might not be so unreasonable to, rather than tunnelling
> >> the log files through the lit output & splitting them back out, do the
> >> work to actually gather those log files directly (it's a bit more work
> >> in the buildbot configuration - enough that I didn't think I could
> >> justify it given Dmitri's original proposal - but seems like it would
> >> simplify this change & leave us roughly where I was discussing earlier
> >> (though your suggestion of generalizing this over all of lit, rather
> >> than just FileCheck is a step beyond what I'd proposed - it sounds
> >> good/right though))
> >
> >
> > I'm fine not having the files dumped into the output, but I at least want
> > the buildbot to get the names of the files to collect from the lit
> > output, not just by scanning the folder.
>
> While I'd tend to agree - is there a particular reason you're not in
> favor of collecting from the folder?
>

It's just explicit vs. implicit: I want the test output to be explicit
about which files are related, and the buildbot to pick up only the things
it was explicitly told about.

Also, I don't want a hidden dependency where the buildbot runners "know"
how lit lays out its test results.
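A rough illustration of the explicit approach Daniel describes (the output
format, function names, and file paths here are all hypothetical, not lit's
actual behavior): lit names the related files in the test's own output, and
the buildbot parses only those lines instead of scanning the sandbox folder.

```python
# Hypothetical sketch: lit emits an explicit manifest of related files in
# the failing test's output, and the buildbot collects only what it was
# explicitly told about -- never by scanning the output directory.

def format_failure(test_name, related_files):
    """Render a failing test's output with an explicit file manifest."""
    lines = ["FAIL: %s" % test_name, "Related files:"]
    lines += ["  %s" % path for path in related_files]
    return "\n".join(lines)

def collect_files(test_output):
    """Buildbot side: pick up only the files the output names explicitly."""
    files, in_manifest = [], False
    for line in test_output.splitlines():
        if line == "Related files:":
            in_manifest = True
        elif in_manifest and line.startswith("  "):
            files.append(line.strip())
        else:
            in_manifest = False
    return files
```

With this shape, nothing depends on the buildbot "knowing" lit's sandbox
layout; renaming the output directory would not break collection.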

> (one reason I can think of is that it might mean we'd lose (or have to
> work harder to maintain) the association between test files and
> temporary files - but we do name the temp files in a structured way
> relative to the test file name, so that doesn't seem terribly hard (&
> we'll have to use a similar naming structure for the output file names
> on the buildbot anyway, to make it easier to pick out the output files
> associated with a failure)
>
> >> I asked about how to do this in the freenode buildbot channel & they
> >> mentioned that it is possible to name log files dynamically to be
> >> retrieved from the slave - I haven't looked into it in detail because
> >> it did sound a bit more complicated, but should be achievable.
> >
> >
> > Yes, it is possible. I can help with the buildbot implementation if need
> > be.
>
> Yep, mightn't hurt.
>
> >> The drawback to your approach is that we'd have to enable this feature
> >> unconditionally - rather than having the optimization advantage of
> >> only dumping files on failure (see my question earlier in the thread,
> >> Eli's concern that dumping output would be expensive, and Dmitri's
> >> response that we'd only be dumping on failure anyway). Given that it
> >> seems the vast majority of our failures aren't flaky, we could have
> >> lit set up to rerun failures in a "create all the temporary files"
> >> mode, though missing flakes would be unfortunate.
> >
> >
> > In my experience, there isn't actually that much performance difference
> > (and sometimes a loss) from using a file instead of a pipe. With all the
> > other stuff going on in the test suite I would be surprised if this added
> > much impact. Note that I personally often write my tests to use temporary
> > files instead of pipes anyway just because I like how the RUN lines look,
> > so it's not exactly like we aren't already storing many of these files.
>
> Fair enough.
>
> Eli - any counterargument/other views on this issue? (since you'd
> mentioned some concern previously)
>
> > There would be some code simplification advantages internally to lit too
> > if it avoided actually using pipes, although I'm not sure I want to go as
> > far as dropping that support.
> >
> > If it is measurable, then my proposal is to do as you say and just rerun
> > the test with the collection enabled. I actually think that lit by default
> > should rerun failing tests so that it can diagnose flaky tests, so this
> > fits in with that.
>
> SGTM


Cool!

 - Daniel
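For reference, the rerun-on-failure flow the thread converges on could be
sketched roughly as follows. This is an illustration only: the helper name,
the environment-variable mechanism, and the "FLAKY" classification are
assumptions layered on the discussion, not lit's actual code.

```python
# Hypothetical sketch of the agreed flow: run the test normally, and on
# failure rerun it with file collection enabled so the artifacts exist for
# the buildbot -- which also lets lit flag flaky tests that pass on rerun.

def run_with_collection(run_test, test, env=None):
    """run_test(test, env) -> bool, True on success (caller-supplied)."""
    if run_test(test, env):
        return "PASS"
    # First run failed: rerun with collection enabled (env var name taken
    # from the thread; the actual mechanism is an assumption).
    if run_test(test, {"FILECHECK_VERY_VERBOSE": "1"}):
        return "FLAKY"   # passed on rerun: likely a flaky test
    return "FAIL"        # deterministic failure; artifacts were collected
```

This only pays the collection cost on the failing path, addressing Eli's
performance concern without enabling the dumping unconditionally.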


> >
> >  - Daniel
> >
> >
> >>
> >> >
> >> >  - Daniel
> >> >
> >> >
> >> > On Wed, Jan 16, 2013 at 3:31 PM, Dmitri Gribenko <gribozavr at gmail.com>
> >> > wrote:
> >> >>
> >> >> On Thu, Jan 17, 2013 at 1:23 AM, Evgeniy Stepanov
> >> >> <eugeni.stepanov at gmail.com> wrote:
> >> >> > On Thu, Jan 17, 2013 at 1:19 AM, Dmitri Gribenko
> >> >> > <gribozavr at gmail.com>
> >> >> > wrote:
> >> >> >> On Wed, Jan 16, 2013 at 9:33 PM, Dmitri Gribenko
> >> >> >> <gribozavr at gmail.com>
> >> >> >> wrote:
> >> >> >>> On Wed, Jan 16, 2013 at 9:24 PM, Chris Lattner
> >> >> >>> <clattner at apple.com> wrote:
> >> >> >>>>
> >> >> >>>> On Jan 16, 2013, at 10:32 AM, Dmitri Gribenko
> >> >> >>>> <gribozavr at gmail.com>
> >> >> >>>> wrote:
> >> >> >>>>
> >> >> >>>>> Hello,
> >> >> >>>>>
> >> >> >>>>> When someone breaks a FileCheck-based test on some buildbot,
> >> >> >>>>> sometimes it may not be obvious *why* it failed.  If the failure
> >> >> >>>>> cannot be reproduced locally, it can be very hard to fix.
> >> >> >>>>>
> >> >> >>>>> I propose adding a "very verbose" mode to FileCheck.  In this
> >> >> >>>>> mode FileCheck will dump the input file in case of failure.  This
> >> >> >>>>> mode will be enabled by an environment variable
> >> >> >>>>> "FILECHECK_VERY_VERBOSE".  If we chose a command line option, we
> >> >> >>>>> would have to edit all FileCheck-based tests to use %FileCheck.
> >> >> >>>>
> >> >> >>>> I think that this idea is good, but I'd prefer it be implemented
> >> >> >>>> a different way:
> >> >> >>>>
> >> >> >>>>  - FileCheck should take a new flag -dump-input-on-error that
> >> >> >>>> causes it to... dump the input file on error.
> >> >> >>>>  - Lit should be the thing that checks the environment (or
> >> >> >>>> perhaps add a new option to lit), and adds the flag to FileCheck
> >> >> >>>> invocations.
> >> >> >>>>
> >> >> >>>> I don't like it when the behavior of such a low-level tool like
> >> >> >>>> this changes based on environment variables.  It isn't
> >> >> >>>> discoverable in --help.  If for some reason it is bad for lit to
> >> >> >>>> implicitly pass the option, I'd rather have a standard
> >> >> >>>> FILECHECK_COMMANDLINE environment variable, and have FileCheck
> >> >> >>>> parse arbitrary options out of it using the
> >> >> >>>> cl::ParseEnvironmentOptions function.
> >> >> >>>
> >> >> >>> I agree that a command line option would be better.  But in that
> >> >> >>> case all tests should be updated.  It is not an issue for me -- it
> >> >> >>> is mostly mechanical.  So should I change tests to use %FileCheck?
> >> >> >>
> >> >> >> Here's a third attempt.
> >> >> >>
> >> >> >> The new behavior is as follows:
> >> >> >>
> >> >> >> 1. In case of errors we always dump output to a temporary file
> >> >> >> and print
> >> >> >
> >> >> > Does this mean we get one more file in /tmp every time a test
> >> >> > fails, and it is not cleaned up automatically? I don't think this
> >> >> > should happen in the "default" mode of the tool.
> >> >>
> >> >> Well, yes.  David requested that and I agreed that it is a good idea.
> >> >> Are you strongly opposed to it?
> >> >>
> >> >> Dmitri
> >> >>
> >> >> --
> >> >> main(i,j){for(i=2;;i++){for(j=2;j<i;j++){if(!(i%j)){j=0;break;}}if
> >> >> (j){printf("%d\n",i);}}} /*Dmitri Gribenko <gribozavr at gmail.com>*/
> >> >>
> >> >> _______________________________________________
> >> >> llvm-commits mailing list
> >> >> llvm-commits at cs.uiuc.edu
> >> >> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
> >> >
> >> >
> >> >
> >> >
> >
> >
>
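The mechanism Chris sketches upthread (lit checks the environment and adds
the flag to FileCheck invocations, so individual tests need no edits) might
look roughly like this. The flag and environment-variable names are taken
from the thread; the substitution logic itself is a simplified assumption,
not lit's actual implementation.

```python
import os

def expand_filecheck(command, environ=os.environ):
    """Rewrite a RUN command so FileCheck dumps its input on error when the
    environment requests it (simplified illustration only)."""
    if environ.get("FILECHECK_VERY_VERBOSE"):
        return command.replace("FileCheck", "FileCheck -dump-input-on-error")
    return command
```

This keeps FileCheck itself free of environment-variable magic: the flag is
visible in --help and in the expanded RUN line, and lit remains the single
place that decides when to pass it.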