[clang-tools-extra] r209141 - Improved llvm-namespace-comment check.

Manuel Klimek klimek at google.com
Thu May 22 01:04:38 PDT 2014


On Thu, May 22, 2014 at 1:00 AM, Sean Silva <chisophugis at gmail.com> wrote:

>
>
>
> On Wed, May 21, 2014 at 12:27 AM, Manuel Klimek <klimek at google.com> wrote:
>
>> On Wed, May 21, 2014 at 7:22 AM, Sean Silva <chisophugis at gmail.com> wrote:
>>
>>> On Tue, May 20, 2014 at 7:34 PM, Alexander Kornienko <alexfh at google.com> wrote:
>>>
>>>> We use lit tests for diagnostic messages. Currently they use FileCheck,
>>>> but it probably makes sense to implement an analog of Clang's -verify in
>>>> clang-tidy.
>>>>
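>>>> In Clang, that mechanism works by writing the expected diagnostics as
>>>> comments next to the code they apply to, roughly like this (a sketch with
>>>> a deliberately simple warning):
>>>>
>>>> // RUN: %clang_cc1 -fsyntax-only -Wunused-variable -verify %s
>>>> void f() {
>>>>   int unused; // expected-warning {{unused variable 'unused'}}
>>>> }
>>>>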
>>>> As for the tests for fix-its, I consider unit tests to be more readable
>>>> for this purpose.
>>>>
>>>
>>> I don't understand this. What could be more readable than something like:
>>>
>>> namespace {
>>> } // namespace asdf
>>> // CHECK:      namespace {
>>> // CHECK-NEXT: } // namespace
>>>
>>> ?? I certainly prefer it hands-down to:
>>>
>>> +  EXPECT_EQ("namespace {\n"
>>> +            "} // namespace",
>>> +            runCheckOnCode<NamespaceCommentCheck>("namespace {\n"
>>> +                                                  "} // namespace asdf"));
>>>
>>> (actually I think FileCheck's default behavior of matching only part of a
>>> line could be an issue here, but that can be worked around with a
>>> regex. I think that even adding a CHECK-EXACT directive or similar to
>>> support this in a very readable way might be worth it for this sort of test)
>>>
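>>> A sketch of that regex workaround, pinning each match to a whole line:
>>>
>>> namespace {
>>> } // namespace asdf
>>> // CHECK:      {{^namespace \{$}}
>>> // CHECK-NEXT: {{^\} // namespace$}}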
>>>
>>>> They are also more isolated and closer to the check itself.
>>>>
>>>
>>> I don't understand this. The check is either 1) performing a particular
>>> source code transformation or 2) providing a diagnostic message about a
>>> questionable construct. In case 1) you want to verify that the output takes
>>> a particular plaintext form, i.e. exactly what FileCheck does. In case 2)
>>> you can avoid rolling your own -verify and use the simpler but almost
>>> surely sufficient facility built into FileCheck:
>>> http://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-expressions
>>>
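>>> For the diagnostic case, [[@LINE]] expressions make that quite compact,
>>> e.g. (the message text here is only illustrative):
>>>
>>> namespace {
>>> } // namespace asdf
>>> // CHECK: :[[@LINE-1]]:1: warning: namespace closing comment refers to a wrong namespace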
>>>
>>>> In a lit test, if we verify the full text of the output, the FileCheck
>>>> tests become quite bulky.
>>>>
>>>
>>> Could you give an example? It is generally possible to create minimal
>>> test cases so that the actual output (and the input, in fact) is quite
>>> compact. yaml2obj is probably a worst case in this respect but we've still
>>> found it easy to keep the tests small and readable. E.g.
>>>
>>>
>>> http://reviews.llvm.org/diffusion/L/browse/llvm/trunk/test/Object/yaml2obj-elf-symbol-LocalGlobalWeak.yaml
>>>
>>> http://reviews.llvm.org/diffusion/L/browse/llvm/trunk/test/Object/yaml2obj-elf-bits-endian.test
>>>
>>>
>>>> Another reason to put these checks in the unit tests is that each new
>>>> lit test file has non-trivial run-time overhead.
>>>>
>>>
>>> Personally I've never found myself waiting on an incremental lit test
>>> run (i.e. rerunning a particular failing test or relevant test directory
>>> for the code I'm working on). I *have* found myself waiting significant
>>> amounts of time for a unittest (gtest) compile and link in that same
>>> scenario.
>>>
>>
>> Interesting. I have pretty much the reverse experience. I think it's
>> mainly because running a single gtest is very simple
>> (--gtest_filter=*MyTest*) while running a single lit test is a nightmare.
>>
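>> E.g. something along these lines (the exact target and binary path depend
>> on the build setup):
>>
>> $ ninja ClangTidyTests && ./tools/clang/tools/extra/unittests/clang-tidy/ClangTidyTests --gtest_filter='*NamespaceComment*'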
>
> Interesting. Are you aware that you can do `$BUILD/bin/llvm-lit
> $SRC/test/Foo` or `$BUILD/bin/llvm-lit $SRC/test/Foo/foobar.test` and it
> will automagically run all tests in the indicated directory, or just the
> selected test? Until I learned that, I found it a nightmare, but it's quite
> pleasant once that trick is known. My typical iteration with yaml2obj was
> something like
>
> $ ninja ./bin/yaml2obj && ./bin/llvm-lit ../llvm/test/Object/yaml2obj*
>

Oh, that's awesome. I somehow assumed one needs to stay in the build
directory ... Thanks!


>> Even worse, if you have a FileCheck test that tests many different things
>> (take for example the static analyzer tests, which I happen to have cursed
>> at over the past few weeks), the only way I found to isolate a single cause
>> for debugging is to copy-and-paste the code.
>>
>
> Well, tests that test many different things are a pain independently of
> the testing methodology or setup ;)
>

Yes, but it seems like that's very easy to do, and very common with
FileCheck-based tests (you just put more and more related things into the
same test.cc file).


>> That's really ineffective. On top of that, the results of FileCheck tests
>> are often really hard to figure out (you basically have to look at the
>> command, figure out which file is actually checked, which is often a copy
>> of a copy of a file somewhere, and then use regexps on that file to see
>> what changed so that the expected pattern no longer matches).
>>
>> I grant you that FileCheck tests are nicer to read if the tests are at
>> the integration level anyway, though, which often makes the balance a tough
>> call (for example, for clang-format we've found that we'd much rather have
>> FileCheck tests for most tests nowadays, but haven't gotten around to
>> converting them yet).
>>
>
> Interesting. I thought that clang-format was a natural case for unittests
> due to the need for juggling the options and styles for different
> verifications, which would be a pain with FileCheck.
>

Passing options is pretty easy. Verification is usually "we want it to look
exactly like that". Perhaps we'd not use FileCheck directly, but build a
small wrapper script that does something simpler (or just use diff ;)
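
For example, the whole verification could just be a pair of RUN lines (file
names made up for illustration):

// RUN: clang-format -style=LLVM %s > %t
// RUN: diff -u %S/Inputs/expected-format.cpp %t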


>> So I'm actually with you in this specific case. I'd want the people who
>> will most likely be maintaining the code to have some leeway here, as
>> insufficient tests will affect them the most.
>>
>> And finally, you're probably not trying to run the lit stuff on Windows,
>> where starting a process has a significant overhead.
>>
>
> I admit I'm not on Windows, but I hear the overhead there is bad;
> hopefully lit can be improved to make things speedier there. It must be
> really bad if it negates the parallelism advantage from lit (IIRC our gtest
> configuration does not run tests in parallel).
>
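> For reference, lit's worker count is controlled with -j, e.g.:
>
> $ ./bin/llvm-lit -j8 -sv ../llvm/test/Object/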

Yeah, but I've found most unit tests to be pretty fast (apart from some of
the ones I authored in debug mode :P).


>
> -- Sean Silva
>
>
>>
>> Cheers,
>> /Manuel
>>
>>
>>>
>>> -- Sean Silva
>>>
>>>
>>>>
>>>> On Tue, May 20, 2014 at 7:32 PM, Sean Silva <chisophugis at gmail.com> wrote:
>>>>
>>>>> Is there a particular reason that this is being tested with unittests
>>>>> instead of the usual lit tests? This kind of thing seems like it would be a
>>>>> lot more readable as a lit test.
>>>>>
>>>>> -- Sean Silva
>>>>>
>>>>
>>>
>>>
>>
>