[PATCH] lit: Incremental test scheduling

Alp Toker alp at nuanti.com
Fri Oct 25 00:00:33 PDT 2013


On 25/10/2013 06:42, David Blaikie wrote:
> Sounds interesting (& lighter weight than a full code
> coverage/dependency based solution).

Interesting that you mention a dependency graph, because that's exactly
what I tried first: using ninja internals directly for the stat cache.

This ended up being pretty custom: making a libninja shared library,
writing a C ninja API and accessing it from Python as a subtool. It
worked, but ultimately led me to this simpler solution, which achieves
more or less the same thing.
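
To give a flavour of the bridge, the Python side was along these lines
-- a minimal sketch with hypothetical symbol names, since nothing like
libninja exists upstream:

    import ctypes

    # Hypothetical libninja API: this illustrates the shape of the
    # experiment, not any real ninja interface.
    libninja = ctypes.CDLL("libninja.so")
    libninja.ninja_stat_mtime.argtypes = [ctypes.c_char_p]
    libninja.ninja_stat_mtime.restype = ctypes.c_long

    def cached_mtime(path):
        # Consult ninja's stat cache rather than hitting the
        # filesystem directly.
        return libninja.ninja_stat_mtime(path.encode("utf-8"))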

> How does this interact with Ninja's functionality of caching the whole
> output of a command before printing? Are you working around that in
> some way?

Running llvm-lit directly instead of the ninja check target gets around
that :-\

(I guess the solution on the ninja side, short of pushing the libninja
features upstream, would be to implement a pool flag that permits live
stderr output for depth = 1)
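
To sketch what I mean -- entirely hypothetical syntax, ninja has no
such flag today:

    # A depth-1 pool whose jobs would stream their output live
    # instead of being buffered until completion.
    pool live
      depth = 1
      live_output = 1  # the proposed flag; does not exist in ninja

    rule check
      command = ./bin/llvm-lit -sv tools/clang/test
      pool = live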

Alp.


>
>
> On Thu, Oct 24, 2013 at 10:35 PM, Alp Toker <alp at nuanti.com> wrote:
>
>     Incremental testing is useful when you're writing code and need to
>     answer these questions fast:
>
>       * Did my recent change fix tests that were previously failing?
>       * Are the tests I just modified passing?
>
>
>     A standard check-clang run takes around 40 seconds on my system,
>     and needs to be run repeatedly while working on a new feature or
>     bug fix. Most of that is time spent waiting.
>
>     With the attached patch, lit instead runs failing and recently
>     modified tests first. This way, I'm likely to get an answer to
>     both questions in under one second, letting me get straight back
>     to fixing the issue.
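>
>     To sketch the scheduling side of this (a simplified illustration,
>     not the patch itself): order the test files by source mtime,
>     newest first, so recently modified -- and, via the mtime trick
>     described under "Implementation notes" below, recently failed --
>     tests run at the front of the queue.
>
>         import os
>
>         def schedule(test_paths):
>             # Newest mtime first: recently edited or recently
>             # failed tests run before everything else.
>             return sorted(test_paths, key=os.path.getmtime, reverse=True)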
>
>     *Workflow*
>     After fixing the code I run lit again; if there are still failures
>     at the start I'll usually hit Ctrl-C, try to fix the issue and
>     repeat until all failures are resolved.
>
>     The feature isn't just for local testing -- on our internal clang
>     build server, LLVM test failures or fixes often get detected
>     within 10 seconds of a commit thanks to ccache/cmake/ninja
>     together with this patch.
>
>     *Performance*
>
>     The additional ordering overhead and cache updates come at a cost
>     of about 2 seconds on top of a 40-second run.
>
>     So whether to enable the feature depends on your workflow and
>     whether you're able to act on the early results.
>
>     For example, the Buildbot version used on llvm.org doesn't send
>     failure notifications until the test run is complete, so the
>     feature would only be marginally useful there until Buildbot gets
>     early notification support.
>
>     *Implementation notes*
>
>     lit shell tests and compiled unit tests are both supported. At the
>     moment, the cache is implemented by scanning and modifying test
>     source mtimes directly. This hasn't caused trouble, but the
>     technique could be refined in the future to store test failures in
>     a separate cache file if needed.
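>
>     For illustration, the failure-recording half of that mtime trick
>     could be as simple as the following (again a sketch, not the
>     patch itself):
>
>         import os
>
>         def record_failure(test_path):
>             # Bump the failing test's source mtime to "now" so it
>             # sorts to the front of the next run.
>             os.utime(test_path, None)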
>
>     *Status*
>     Not yet tested on Windows. Let me know if you find this useful!
>
>     Alp.
>
>     -- 
>     http://www.nuanti.com
>     the browser experts
>
>
>     _______________________________________________
>     llvm-commits mailing list
>     llvm-commits at cs.uiuc.edu
>     http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
>
>

-- 
http://www.nuanti.com
the browser experts
