[PATCH] lit: Incremental test scheduling
Alp Toker
alp at nuanti.com
Mon Oct 28 15:37:23 PDT 2013
On 28/10/2013 02:04, Daniel Dunbar wrote:
> I like the functionality, but I will have to think some more about the
> implementation.
I'll be giving that patch a rework as there was plenty of interest, but I
don't want to add much more complexity since it's just a quick fix; see below.
>
> I have already considered having lit maintain some sort of database of
> test results. That would be useful for reporting things like flaky
> tests, reporting performance test changes, and for this feature.
>
> The other thing that would be really nice to open the door for would
> be if lit could detect which tools have changed and only rerun tests
> for those tools. Implementing this feature by having lit keep a
> database of test results might provide a base for that...
I have an implementation of this with complete validation and dependency
scanning for lit. So far it's working really well, and it often means we can
get a complete set of test results for a changeset in seconds.
The feature uses lit as a generator, after which the tests are driven
with ninja instead of Python. The main win is that subsequent runs only
need to run tests for changed dependencies in order to ascertain a full
set of test results.
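The generator step might look something like the following sketch: emit one
ninja build edge per test so that ninja, not Python, decides which tests need
rerunning. The function name and rule format here are illustrative
assumptions, not the patch's actual output:

```python
# Hypothetical sketch: generate build.ninja text where each test gets a
# stamp-file target, so ninja reruns a test only when the test file is
# newer than its stamp.

def emit_ninja_rules(tests, lit_path="lit"):
    lines = [
        "rule run-lit-test",
        "  command = %s $in && touch $out" % lit_path,
        "",
    ]
    for test in tests:
        # The stamp ($out) is touched only on success, so failed tests
        # keep rerunning until they pass.
        lines.append("build %s.passed: run-lit-test %s" % (test, test))
    return "\n".join(lines) + "\n"

print(emit_ninja_rules(["test/Lexer/foo.c", "test/Parser/bar.c"]))
```

With real dependency scanning, each build edge would also list the tools and
input files the test uses, so a rebuilt clang binary invalidates exactly the
tests that run it.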
I've been committing the underlying fixes needed to make this work over
the last week and plan to set up a builder using it shortly.
The new patchset enhances the integrated lit shell to work as a
dependency scanner, and also implements nice clang-style coloured
diagnostics with source locations for lit :-)
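The dependency-scanning half could plausibly work by inspecting a test's RUN
lines for the files they reference. A toy sketch of that idea, assuming a
simple `%s` substitution and a fixed set of file extensions (this is not the
patch's real scanner):

```python
# Sketch only: collect file paths mentioned on a lit test's RUN lines,
# so a build tool can treat them as the test's inputs.
import re
import shlex

def scan_run_line_deps(test_source, test_path):
    deps = set()
    for line in test_source.splitlines():
        m = re.search(r"RUN:\s*(.*)", line)
        if not m:
            continue
        # Substitute %s (the test file itself), then tokenize the
        # command and keep tokens that look like source/input files.
        cmd = m.group(1).replace("%s", test_path)
        for tok in shlex.split(cmd):
            if tok.endswith((".c", ".cpp", ".h", ".ll")):
                deps.add(tok)
    return sorted(deps)
```

For example, a test containing `// RUN: clang -cc1 %s -include other.h`
would report both the test file and `other.h` as dependencies.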
Alp.
>
> I'll think some more about this and get back to you with some
> more concrete ideas next week.
>
> - Daniel
>
> On Sunday, October 27, 2013, Andrew Trick wrote:
>
> I really want this feature. It may not be clean, but would save a
> lot of time in practice. However...
>
> On Oct 24, 2013, at 10:35 PM, Alp Toker <alp at nuanti.com> wrote:
>
>> lit shell tests and compiled unit tests are both supported. At
>> the moment, the cache is implemented by scanning and modifying
>> test source mtimes directly. This hasn't caused trouble but the
>> technique could be refined in future to store test failures in a
>> separate cache file if needed.
>
> This doesn’t feel right. I don’t think it will personally cause me
> any grief, but seems like someone or their tools will be confused.
> If others think it’s ok, I won’t object. I just don’t want to
> accept the patch until others review this aspect.
>
> -Andy
>
>
>
> --
> - Daniel
--
http://www.nuanti.com
the browser experts
More information about the llvm-commits
mailing list