[clang-tools-extra] r209141 - Improved llvm-namespace-comment check.

Manuel Klimek klimek at google.com
Tue May 20 23:53:08 PDT 2014


On Wed, May 21, 2014 at 8:46 AM, Alp Toker <alp at nuanti.com> wrote:

>
> On 21/05/2014 09:27, Manuel Klimek wrote:
>
>> On Wed, May 21, 2014 at 7:22 AM, Sean Silva <chisophugis at gmail.com> wrote:
>>
>>     On Tue, May 20, 2014 at 7:34 PM, Alexander Kornienko
>>     <alexfh at google.com> wrote:
>>
>>         We use lit tests for diagnostic messages. Currently they use
>>         FileCheck, but it probably makes sense to implement an analog
>>         of Clang's -verify in clang-tidy.
>>
>>         As for the tests for fix-its, I consider unit tests to be more
>>         readable for this purpose.
>>
>>
>>     I don't understand this. What could be more readable than
>>     something like:
>>
>>     namespace {
>>     } // namespace asdf
>>     // CHECK:      namespace {
>>     // CHECK-NEXT: } // namespace
>>
>>     ?? I certainly prefer it hands down to:
>>
>>     EXPECT_EQ("namespace {\n"
>>               "} // namespace",
>>               runCheckOnCode<NamespaceCommentCheck>("namespace {\n"
>>                                                     "} // namespace asdf"));
>>
>>     (Actually, I think FileCheck's default behavior of allowing a
>>     partial-line match could be an issue here, but that can be worked
>>     around with a regex. Even adding a CHECK-EXACT directive or similar
>>     to support this in a very readable way might be worth it for this
>>     sort of test.)
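>>
>>     To sketch the regex workaround (untested, and the anchored line
>>     below is just the idiom I have in mind, not verified against the
>>     check's actual output):
>>
>>     namespace {
>>     } // namespace asdf
>>     // CHECK:      namespace {
>>     // CHECK-NEXT: {{^}}} // namespace{{$}}
>>
>>     The {{^}} and {{$}} inline regexes anchor the pattern to a whole
>>     line, so the stale "asdf" comment can't slip through as a partial
>>     match; a CHECK-EXACT directive would say the same thing more
>>     readably.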
>>
>>         They are also more isolated and closer to the check itself.
>>
>>
>>     I don't understand this. The check is either 1) performing a
>>     particular source code transformation or 2) providing a diagnostic
>>     message about a questionable construct. In case 1) you want to
>>     verify that the output takes a particular plaintext form, i.e.
>>     exactly what FileCheck does. In case 2) you can avoid rolling
>>     your own -verify and use the simpler but almost surely sufficient
>>     facility built into FileCheck:
>>     http://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-expressions
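>>
>>     For instance, with FileCheck's [[@LINE]] expressions a diagnostic
>>     test could look something like this (the warning text and column
>>     here are made up for illustration, not the check's real wording):
>>
>>     namespace {
>>     } // namespace asdf
>>     // CHECK: [[@LINE-1]]:1: warning: namespace closing comment refers to the wrong namespace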
>>
>>         In a lit test, if we verify the full text of the output, the
>>         FileCheck tests become quite bulky.
>>
>>     Could you give an example? It is generally possible to create
>>     minimal test cases so that the actual output (and the input, in
>>     fact) is quite compact. yaml2obj is probably a worst case in this
>>     respect, but we've still found it easy to keep the tests small and
>>     readable. E.g.
>>
>>     http://reviews.llvm.org/diffusion/L/browse/llvm/trunk/test/Object/yaml2obj-elf-symbol-LocalGlobalWeak.yaml
>>     http://reviews.llvm.org/diffusion/L/browse/llvm/trunk/test/Object/yaml2obj-elf-bits-endian.test
>>
>>
>>         Another reason to put these checks in the unit tests is that
>>         each new lit test file has non-trivial run-time overhead.
>>
>>
>>     Personally I've never found myself waiting on an incremental lit
>>     test run (i.e. rerunning a particular failing test or relevant
>>     test directory for the code I'm working on). I *have* found myself
>>     waiting significant amounts of time for a unittest (gtest) compile
>>     and link in that same scenario.
>>
>>
>> Interesting. I have pretty much the reverse experience. I think it's
>> mainly because running a single gtest is very simple
>> (--gtest_filter=*MyTest*) while running a single lit test is a nightmare.
>> Even worse, if you have a FileCheck test that tests many different things
>> (take for example the static analyzer tests, which I happen to have cursed
>> at in the past weeks), the only way I've found to isolate a single cause
>> for debugging is to copy-and-paste the code. That's really inefficient. On
>> top of that, the results of FileCheck tests are often really hard to
>> figure out: you basically have to look at the command, figure out which
>> file is actually checked (often a copy of a copy of a file somewhere), and
>> then use regexps on that file to see what changed so that the expected
>> match was not found.
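>>
>> Concretely, isolating a failure looks something like this for me (the
>> path and binary name are illustrative, from a typical build tree):
>>
>>   unittests/clang-tidy/ClangTidyTests --gtest_filter=*NamespaceComment*
>>
>> versus digging the RUN line and the temporary output files out of a lit
>> test to reproduce a single FileCheck mismatch by hand.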
>>
>
> FileCheck isn't too nice for this, but I agree with Sean that using
> -verify would win hands down.
>
> And keep in mind that not all developers are using the unit test
> infrastructure, whereas lit tests are the best way to cover user-facing
> features because they always get run. In fact at work I'm probably the only
> one building gtest at all, and then that's only so as not to break the
> build.
>
>
>
>> I grant you that FileCheck tests are nicer to read if the tests are at
>> the integration level anyway, which often makes the balance a tough call
>> (for example, for clang-format we've found that we'd much rather have
>> FileCheck tests for most tests nowadays, but haven't gotten around to
>> converting them yet). So I'm actually with you in this specific case. I'd
>> want the people most likely to maintain the code to have some leeway
>> here, as insufficient tests will affect them the most.
>>
>> And finally, you're probably not trying to run the lit stuff on windows,
>> where starting a process has a significant overhead.
>>
>
> Heh, I should re-post the in-process lit backend for Windows some time;
> you're right, it's really slow without that!
>

I'd also love an in-process lit backend for Linux, btw. Please cc me on
the patches :)


>
> Alp.
>
>
>> Cheers,
>> /Manuel
>>
>>
>>     -- Sean Silva
>>
>>
>>
>>         On Tue, May 20, 2014 at 7:32 PM, Sean Silva
>>         <chisophugis at gmail.com> wrote:
>>
>>             Is there a particular reason that this is being tested
>>             with unittests instead of the usual lit tests? This kind
>>             of thing seems like it would be a lot more readable as a
>>             lit test.
>>
>>             -- Sean Silva
>>
>>
>
> --
> http://www.nuanti.com
> the browser experts
>
>