[clang-tools-extra] r209141 - Improved llvm-namespace-comment check.

Sean Silva chisophugis at gmail.com
Thu May 22 20:37:28 PDT 2014


On Thu, May 22, 2014 at 2:06 AM, Manuel Klimek <klimek at google.com> wrote:

> On Thu, May 22, 2014 at 1:09 AM, Sean Silva <chisophugis at gmail.com> wrote:
>
>>
>>
>>
>> On Wed, May 21, 2014 at 6:51 AM, Alexander Kornienko <alexfh at google.com> wrote:
>>
>>> On Wed, May 21, 2014 at 7:22 AM, Sean Silva <chisophugis at gmail.com> wrote:
>>>
>>>> On Tue, May 20, 2014 at 7:34 PM, Alexander Kornienko <alexfh at google.com> wrote:
>>>>
>>>>> We use lit tests for diagnostic messages. Currently they use
>>>>> FileCheck, but it probably makes sense to implement an analog of Clang's
>>>>> -verify in clang-tidy.
>>>>>
>>>>> As for the tests for fix-its, I consider unit tests to be more
>>>>> readable for this purpose.
>>>>>
>>>>
>>>> I don't understand this. What could be more readable than something
>>>> like:
>>>>
>>>> namespace {
>>>> } // namespace asdf
>>>> // CHECK:      namespace {
>>>> // CHECK-NEXT: } // namespace
>>>>
>>>> ?? I certainly prefer it hands-down to:
>>>>
>>>> +  EXPECT_EQ("namespace {\n"
>>>> +            "} // namespace",
>>>> +            runCheckOnCode<NamespaceCommentCheck>("namespace {\n"
>>>> +                                                  "} // namespace asdf"));
>>>>
>>>> (Actually, I think FileCheck's default behavior of matching partial
>>>> lines could be an issue here, but that can be worked around with a
>>>> regex.
>>>>
>>>
>>> Yes, it can be worked around. And the resulting test would look similar
>>> to this:
>>>
>>> // RUN: grep -Ev "// *[A-Z-]+:" %s > %t.cpp
>>> // RUN: clang-tidy %t.cpp -checks='-*,llvm-namespace-comment' -fix --
>>> // RUN: FileCheck -input-file=%t.cpp -strict-whitespace %s
>>> namespace {
>>> } // namespace asdf
>>> // CHECK-NOT: {{.*}}
>>> // CHECK: {{^namespace {$}}
>>> // CHECK-NEXT: {{^} // namespace$}}
>>> // CHECK-NOT: {{.*}}
>>>
>>>
>>> This can be simplified by extracting all the RUN: commands into a script
>>> (e.g. check_clang_tidy_fix.sh, which is used
>>> in redundant-smartptr-get-fix.cpp), but then the test will require a
>>> shell, which, if I understand correctly, won't let it run on Windows, for example.
>>>
>>> And if you compare this version with the unit test above, and consider
>>> also factors like portability, performance, convenience of running a single
>>> test and debugging it, etc., what would you prefer? I would definitely
>>> prefer the unit-test version.
>>>
>>
>> I agree, with the current state of FileCheck, that is not pretty.
>>
>> However, I think that by using CHECK-LABEL and adding two features to
>> FileCheck it could be pretty much as simple and clean as I described. The
>> first feature would be building the ignoring of CHECK lines themselves
>> into FileCheck (IIRC this has been needed in the past for clang-modernize,
>> so it would be a nice addition); the second, a CHECK-EXACT directive,
>> would make matching exact text a lot cleaner. But yeah, that requires
>> some work on FileCheck.
>>
>>
>>>
>>>  I think that even adding a CHECK-EXACT directive or similar to support
>>>> this in a very readable way might be worth it for this sort of test)
>>>>
>>>>
>>>>> They are also more isolated and closer to the check itself.
>>>>>
>>>>
>>>> I don't understand this.
>>>>
>>>
>>> The less code we run, the easier it is to isolate the problem. I'm not
>>> saying that integration tests aren't needed, but for the fix-it scenario
>>> unit tests are, IMO, better for the reasons I mentioned above.
>>>
>>>
>>>>  The check is either 1) performing a particular source code
>>>> transformation or 2) providing a diagnostic message about a questionable
>>>> construct. In case 1) you want to verify that the output takes a particular
>>>> plaintext form, i.e. exactly what FileCheck does. In case 2) you can
>>>> avoid rolling your own -verify and use the simpler but almost surely
>>>> sufficient facility built into FileCheck
>>>> http://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-expressions
>>>>
>>>
>>> FileCheck is easy to use, but it is too permissive by default. It requires
>>> substantial effort to write stricter tests that don't allow, for example,
>>> extra warnings in the output or warnings placed on the wrong line.
>>>
>>>
>>>>
>>>>
>>>>> In a lit test, if we verify the full text of the output, the FileCheck
>>>>> tests become quite bulky.
>>>>>
>>>>
>>>> Could you give an example?
>>>>
>>>
>>> See the corrected version of your example above.
>>>
>>>
>>>>  It is generally possible to create minimal test cases so that the
>>>> actual output (and the input, in fact) is quite compact. yaml2obj is
>>>> probably a worst case in this respect, but we've still found it easy to
>>>> keep the tests small and readable. E.g.
>>>>
>>>>
>>>> http://reviews.llvm.org/diffusion/L/browse/llvm/trunk/test/Object/yaml2obj-elf-symbol-LocalGlobalWeak.yaml
>>>>
>>>> http://reviews.llvm.org/diffusion/L/browse/llvm/trunk/test/Object/yaml2obj-elf-bits-endian.test
>>>>
>>>>
>>>>> Another reason to put these checks in the unit tests is that each new
>>>>> lit test file has non-trivial run-time overhead.
>>>>>
>>>>
>>>> Personally I've never found myself waiting on an incremental lit test
>>>> run (i.e. rerunning a particular failing test or relevant test directory
>>>> for the code I'm working on). I *have* found myself waiting significant
>>>> amounts of time for a unittest (gtest) compile and link in that same
>>>> scenario.
>>>>
>>>
>>> This depends on what you change. If you change just the test, then yes,
>>> lit tests don't require rebuilding binaries. But if you change the tested
>>> code, you wait for recompilation anyway.
>>>
>>
>> I guess the tradeoffs here depend on the nature of the executable. If you
>> are having to link all of clang, then you're going to have a big link for
>> your test executable regardless, so I guess the unittests aren't too much
>> of a drag. My experience with yaml2obj, which is a relatively light
>> program, was that it allowed an extremely fast rebuild and link compared
>> to the O(all unittests + gtest) link.
>>
>
> I usually just build the unit tests I'm about to run? My cycle looks like:
> ninja MyUnitTests && ./path/to/MyUnitTests
>

I'll need to reanalyze the situation. IIRC, somewhere in my past I tracked
down a really significant wait on linking gtest itself (independently of
any actual tests, which in this case add even more time, since all tests are
linked into the same executable), and I think that experience has made me a
bit gtest-averse. Out of curiosity, with a typical clang-tidy or
clang-format unittest, how long does "ninja MyUnitTests &&
./path/to/MyUnitTests" take on your machine? I think part of my problem may
be that my machine here at home might be in need of an upgrade...

-- Sean Silva


>
>>
>> -- Sean Silva
>>
>>
>>>
>>>
>>>
>>>>
>>>> -- Sean Silva
>>>>
>>>>
>>>>>
>>>>> On Tue, May 20, 2014 at 7:32 PM, Sean Silva <chisophugis at gmail.com> wrote:
>>>>>
>>>>>> Is there a particular reason that this is being tested with unittests
>>>>>> instead of the usual lit tests? This kind of thing seems like it would be a
>>>>>> lot more readable as a lit test.
>>>>>>
>>>>>> -- Sean Silva
>>>>>>
>>>>>
>>>
>>
>> _______________________________________________
>> cfe-commits mailing list
>> cfe-commits at cs.uiuc.edu
>> http://lists.cs.uiuc.edu/mailman/listinfo/cfe-commits
>>
>>
>

