[PATCH] D14706: [lit] Implement support of per test timeout in lit.

Dan Liew via llvm-commits llvm-commits at lists.llvm.org
Mon Nov 16 10:32:55 PST 2015


delcypher added inline comments.

================
Comment at: utils/lit/tests/Inputs/per_test_timeout/slow.py:8
@@ +7,2 @@
+print("slow program")
+time.sleep(6)
----------------
jroelofs wrote:
> delcypher wrote:
> > If I did `lit.py tests/Inputs/per_test_timeout/` lit would try to run the `.py` scripts (in addition to the `.txt`) which would hang. 
> > 
> > I've changed `lit.cfg` just to run the `.txt` scripts. I was mistaken about needing `lit.cfg` trickery here.
> > I thought that would also prevent the `.py` scripts from being usable from the `.txt` test cases that invoke lit (which is why I mentioned the `lit.cfg` trickery), but if you manually specify the test case names it doesn't seem to matter that `.py` isn't in `config.suffixes`.
> > 
> > The latest patch I just uploaded should address this.
> Ohh, I see what you were running into now. I don't think we have to worry about running lit on that particular test directory without the requisite `--timeout` flag.
> 
> > I've changed lit.cfg just to run the .txt scripts.
> 
> Running `lit.py tests/Inputs/per_test_timeout/` **should** hang, but running `lit.py tests/Inputs/per_test_timeout/ --timeout=1` should not.
If this is the case, why did the patch you posted with the subject `[lit] RFC: Per test timeout` have `timeout.py` invoke lit inside lit with the `--timeout` flag? If you expect the caller to use `--timeout=`, then having an additional test case that invokes lit on the other test cases with that flag seems redundant.

In my current implementation the "lit invoking lit" tests (`timeout_*.txt`) are not redundant, because they invoke lit in three different ways:

* Use the internal shell, setting the timeout on the command line
* Use the external shell, setting the timeout on the command line
* Use the internal shell, setting the timeout in the configuration file (i.e. no command line flag); a sketch of this case follows the list
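For reference, here is a minimal sketch of what the configuration-file variant could look like. This is illustrative only: `config.suffixes = ['.txt']` matches the change described above (so the helper `.py` programs are not discovered as tests), and the attribute name `maxIndividualTestTime` is an assumption about this patch's config hook, which may differ between revisions:

```python
# lit.cfg -- hypothetical sketch for the per_test_timeout inputs.
# ('config' and 'lit_config' are provided by lit when it loads this file.)
import lit.formats

config.name = 'per-test-timeout'
# Run tests with lit's internal shell.
config.test_format = lit.formats.ShTest(execute_external=False)
# Only discover the .txt driver scripts; helper programs such as
# slow.py are invoked by the tests, not run as tests themselves.
config.suffixes = ['.txt']

# Third case above: set the per-test timeout (in seconds) in the
# configuration file instead of passing --timeout on the command line.
# NOTE: 'maxIndividualTestTime' is assumed here; the exact attribute
# name may differ in this revision of the patch.
lit_config.maxIndividualTestTime = 1
```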

I can revert the small change I made to `lit.py`, but that raises two questions:

* Who is running these tests?
* How does the person running these tests know how to run them correctly and interpret their results?


http://reviews.llvm.org/D14706
