[cfe-commits] Patch for review: enhancements/fixes to -verify
Andy Gibbs
andyg1001 at hotmail.co.uk
Tue Jun 12 02:35:47 PDT 2012
On Tuesday, June 12, 2012 1:58 AM, Richard Smith wrote:
>> The way around would be...
>>
>> // expected-error@+2 {{should have foo}}
>> #if !__has_feature(foo)
>> #error should have foo
>> #endif
>>
>> Of course, it does require that the test-case writer will think
>> to do this, in which case the unexpected no-op might not be
>> written in the first place. As mentioned in my response to
>> Jordan Rose, perhaps this is where some test-case-writing
>> guidelines on the clang website might come in handy. (I'm not
>> particularly volunteering here; I expect there will be clang
>> veterans much better suited to this.)
>
> Right, my point was not that it's hard to test, but that it's
> dangerous: it adds to the list of things we need to be careful
> about. I'm also concerned that we may have existing tests which
> do something like that, which this change would neuter. If not,
> then I'm happy to accept that this doesn't happen in practice.
Ok, I understand your point, and it is impossible to give a cast-iron guarantee that
no-one will in future write a test in this way that false-passes: you can't add a
feature providing conditional checking without hitting this problem. However, I have
gone through the current test suite, and of the 1734 tests that invoke clang with
"-verify" and contain some sort of expected-* check, only one has an expected-* check
inside an #if-style block (and that was one I had already picked up), so I think the
idiom is not too common.
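As an aside, in case the @line form quoted at the top is unfamiliar: the offset simply
counts source lines downwards from the line the comment itself sits on, and every
intervening line counts, comments and blanks included. So, taking the same example with
one extra comment line in the middle:

// expected-error@+3 {{should have foo}}
// (this comment line still counts towards the offset)
#if !__has_feature(foo)
#error should have foo
#endif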
>>> The PCH changes are unfortunate. Is there any way we can get
>>> this to work properly for those cases? The relevant lines *are*
>>> included in the TU (through an included file or PCH).
>>
>> But my understanding is that the generated PCH doesn't include
>> the comments, so they can't be extracted from there. Someone
>> will be able to correct me here, or may have some other ideas
>> how to get around it.
>
> Yes, I suspect this may not be possible. One alternative would be
> to introduce another -verify mode which looks at comments within
> #if 0'd out blocks.
My first version of the verify patch did exactly this: it had a -verify-all mode, but I
reached the conclusion that, especially with the ability to attach a line number to a
diagnostic, there wasn't much real need for it. Even in the PCH test-cases, the
diagnostics are still properly exposed and handled. The disadvantage of a separate mode,
in my opinion, is that you must remember when (or whether) to use it, and I'm not sure
that is any better.
>>> Were all the non-PCH test changes actually necessary? I would
>>> find it very worrying if we are somehow losing expected-foo
>>> checks in some cases.
>>>
>>> For instance:
>>>
>>> +// expected-error@+2 {{pasting formed '/*', an invalid preprocessing token}}
>>> #define COMM / ## *
>>> -COMM // expected-error {{pasting formed '/*', an invalid preprocessing token}}
>>> +COMM
>>>
>>> Why is this change necessary?
>>
>> Because the preprocessor was unable to "see" the comment in this
>> situation.
>
> This is exactly what I meant by "losing" expected-foo checks --
> any new markers added in such places will have no effect. For what
> it's worth, I think this is at least partially a feature of these
> changes: they're allowing us to more directly test our
> CommentHandler logic, which appears to be broken in various ways.
> Nonetheless, we should fix the preprocessor rather than working
> around it in the tests.
I do agree.
However, let's quickly consider the changes that have been made to the existing
test-cases. They fall into three main groups:
1. PCH tests
This group forms the majority, and in my opinion the changes affect neither the validity
nor the purpose of these tests, nor do they weaken them (for the reasons given below).
2. Tests involving #warning/#error
This covers a couple of tests which, if I may be bold enough to say so, are actually
better tested now than before, since (as mentioned in my first post on this) the
following passes in the current implementation of -verify:
#error XXX // expected-error{{garbage!}}
(there is a short illustration of the difference just after this list). The one case
where this is not true is the one mentioned above, where #error/#warning appears within
an #if-style block; but the test-case in question has been revised to compensate for this.
3. Tests involving some sort of preprocessor error
There are only three of these, and as you say they are probably exposing bugs in the
preprocessor / CommentHandler interface. To answer your question below about how many
tests might have to become XFAIL: it would be these three (if any -- see my comments
further down on that question).
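To illustrate the group-2 point with the example just given: under the patch the text
between the braces is actually checked against the message that is emitted, so, roughly
speaking,

#error XXX // expected-error{{garbage!}}

no longer passes, whereas

#error XXX // expected-error{{XXX}}

does. (The wording here is only for illustration, of course.)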
I think the important thing to remember, and it applies to all three groups above, is
that clang still emits the diagnostics. If, for whatever reason, the diagnostic check
comments are not properly extracted, then the compiler will emit an error due to
unexpected diagnostics. The only case where this does not hold, as far as I know, is
your crunch-case above, i.e. where the expected-* check is inside an #if-block and the
diagnostic is also only generated inside that #if-block.
However, I would argue that this is precisely what one would want and expect, since code
inside the #if-block will also only produce diagnostics *if* the code is included. If we
consider your crunch-case:
#if !__has_feature(X)
#error doesn't have X
#endif
What is this trying to achieve? My guess is that, in most cases, one simply
wants to stop the test-case if feature X is not available, since it would
probably mean the test cannot run in any case. In this situation, the
test-case doesn't need or want an expected-error to be included.
The alternative is either to put the expected-error *outside* the #if block, causing a
test-case failure if feature X *does* exist (in which case, why not just invert the
check?), or to put it *inside* the #if block, which effectively neutralises the #error
(in which case, why have it at all?). Both variants are spelled out just below.
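Spelled out, the two variants look like this (X is just a stand-in for whatever feature
the test cares about):

// Variant 1: check outside the block.  If feature X *is* available, the
// #error never fires, the expected error is never seen, and the test fails.
// expected-error@+2 {{doesn't have X}}
#if !__has_feature(X)
#error doesn't have X
#endif

// Variant 2: check inside the block.  When X is missing, the #error fires
// but is matched as "expected" and so no longer stops the test; when X is
// present, the whole block is skipped and nothing is checked at all.
#if !__has_feature(X)
#error doesn't have X // expected-error {{doesn't have X}}
#endif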
I'm afraid I honestly can't think of many real-world uses for such a
construction in a test-case. :o)
>> Alternatively, we could mark these tests as XFAIL and then, as a
>> separate thing, look at fixing the preprocessor, if that is what
>> needs doing.
>>
>> What's the best approach, from your point of view?
>
> The @line part seems to be uncontroversial -- if you can provide a
> patch for just that part, we can work to get that checked in first.
> I'm not particularly comfortable basing -verify on CommentHandler
> while it's known to have dropped-comment bugs, so I'd prefer to get
> it fixed before taking the CommentHandler change. Assuming that we're
> happy to change all the PCH tests per your patch, how many tests would
> we have to XFAIL?
If I'm honest, I'd actually rather fix the preprocessor and then seek to get the whole
patch committed as one. After all, the main purpose of the patch was to enable the
conditional testing aspect; the @line part grew out of that.
I'd like to suggest an alternative: I will add some extra test-cases that explicitly
exercise the new verify features, specifically to test in what way expected-* checks are
and are not exposed (using FileCheck). Rather than XFAILing the existing tests (the
modifications to them do not invalidate the tests themselves), I would add new tests
which XFAIL at present and which can be switched to PASS once the preprocessor is fixed
(I would be happy to look into that).
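For the token-pasting case, for instance, I'm picturing something along these lines (the
exact RUN line, and whether we also pipe the output through FileCheck to pin down the
failure message, are details to settle; the diagnostic text is the one from the existing
test):

// RUN: %clang_cc1 -verify %s
// XFAIL: *
// The comment handler currently never sees the comment on the COMM line
// below, so -verify reports the diagnostic as unexpected and the test fails;
// once the preprocessor is fixed, the XFAIL marker can simply be dropped.
#define COMM / ## *
COMM // expected-error {{pasting formed '/*', an invalid preprocessing token}}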
Is this an acceptable proposal?
I'm also, later this evening, going to look into extending @line to
@file:line as per your suggestion.
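For what it's worth, the spelling I currently have in mind is just the existing form
with a file name in front of the line number, along the lines of

// expected-error@some-header.h:10 {{whatever the header produces there}}

where the header name and line number here are of course only placeholders; I'm open to
suggestions on the exact syntax.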
Cheers
Andy