[libcxx-commits] [PATCH] D111738: [libcxx][test] Add tests for std::vector comparisons

Ruslan Arutyunyan via Phabricator via libcxx-commits libcxx-commits at lists.llvm.org
Fri Oct 15 15:34:25 PDT 2021


rarutyun added a comment.

In D111738#3063968 <https://reviews.llvm.org/D111738#3063968>, @ldionne wrote:

> Thanks a lot for looking into this! I was incredibly surprised to learn that we didn't have tests for `std::vector`'s comparison. It's pretty vexing.

@ldionne Yep, no problem.

> If you have similar patches for other container types, it's probably reasonable to fold them into this one (assuming they are all pretty similar).

Actually, as I mentioned in one of the comments, we expect to have a relatively large number of patches that increase code coverage. We decided to start with containers, but our changes that improve coverage are not limited to container functionality.

Some patches for containers are similar, others are not. We tried to keep them relatively small and comfortable to review. Even if some of them end up similar for `deque`, `list`, etc., our preference is to keep them small and well decomposed. If you have a strong preference we can try to fold them into one, but I believe that would likely slow things down.



================
Comment at: libcxx/test/std/containers/sequences/vector/compare.fail.cpp:1
+//===----------------------------------------------------------------------===//
+//
----------------
Mordante wrote:
> ldionne wrote:
> > Quuxplusone wrote:
> > > @rarutyun wrote:
> > > > We have a number of patches that increase code coverage of LLVM libc++, and we would like to contribute them
> > > 
> > > My kneejerk reaction to this PR, initially, was "I don't think this adds value," since it's mostly testing the ill-formed cases' diagnostics. But you're saying that this PR increases code coverage according to some measurable metric? Could you elaborate a bit on how you're measuring code coverage (especially for heavily templated code like this)?
> > @Quuxplusone I mean, the negative comparison test we can have a discussion about, but the positive test is definitely increasing coverage. I was surprised when I was unable to find tests for `vector`'s comparison operators, and it seems like indeed we didn't have any before this patch.
> > 
> > @rarutyun Regarding the negative tests, what Arthur is saying is that we don't normally assert the ill-formedness of things. One reason is that there's almost always an infinite number of ways a program can be ill-formed, so where do you stop? If the Standard did specify that `operator==(vector<T>, vector<U>)` **does not participate in overload resolution** if `T` and `U` are not comparable, then we'd test that. But as it is, the fact that `T` and `U` are `Cpp17EqualityComparable` is a precondition (http://eel.is/c++draft/description#structure.specifications-3.3), which means that it's technically UB to perform the comparisons you do below.
> > 
> > I think my preferred approach here would be to drop the negative tests but definitely keep the positive ones.
> @rarutyun I'm also interested to learn more about how you measured the coverage. The coverage available at http://lab.llvm.org:8080/coverage/coverage-reports/index.html misses a lot of files and I'm not convinced the values it shows are correct. So I would be very interested to get the correct values.
> Could you elaborate a bit on how you're measuring code coverage (especially for heavily templated code like this)?

@Quuxplusone Sure. We used the BullsEye tool. For template code it tracks that a template was instantiated at least once (with any type) and called. Of course, if a class has a partial specialization with its own member functions, those won't be covered unless you instantiate the class so that it matches the partial specialization and call the method you want to cover.
Beyond that, the tool checks that all branches within a function are covered, as usual. Nothing special, IIRC.

> The coverage available at http://lab.llvm.org:8080/coverage/coverage-reports/index.html

@Mordante From what I can see, the report above (if I looked correctly) does not cover the `include/` folder, only `src/`. Thus, I believe many things might be missed. Roughly, we build the tests and then tell the tool that the `include/` folder needs to be covered. That is a very high-level view of how we do it; I don't have the particular technical details at hand right now.

> So I would be very interested to get the correct values.

We believe in the numbers we got, but this tool has a drawback (as do all the tools we tried): it cannot instrument `constexpr` functions, because it injects special code into the function body that does not fulfill the `constexpr` requirements. As a result, BullsEye simply ignores such functions.

We had the idea of removing `constexpr` with a script on the fly, just to collect code coverage metrics, but as you may guess it is not that easy: `constexpr` functions can be called in contexts where a constant expression is required (e.g., as a non-type template argument). With that approach applied, the code fails to compile in some circumstances.



================
Comment at: libcxx/test/std/containers/sequences/vector/compare.fail.cpp:1
+//===----------------------------------------------------------------------===//
+//
----------------
rarutyun wrote:
> Mordante wrote:
> > ldionne wrote:
> > > Quuxplusone wrote:
> > > > @rarutyun wrote:
> > > > > We have a number of patches that increase code coverage of LLVM libc++, and we would like to contribute them
> > > > 
> > > > My kneejerk reaction to this PR, initially, was "I don't think this adds value," since it's mostly testing the ill-formed cases' diagnostics. But you're saying that this PR increases code coverage according to some measurable metric? Could you elaborate a bit on how you're measuring code coverage (especially for heavily templated code like this)?
> > > @Quuxplusone I mean, the negative comparison test we can have a discussion about, but the positive test is definitely increasing coverage. I was surprised when I was unable to find tests for `vector`'s comparison operators, and it seems like indeed we didn't have any before this patch.
> > > 
> > > @rarutyun Regarding the negative tests, what Arthur is saying is that we don't normally assert the ill-formedness of things. One reason is that there's almost always an infinite number of ways a program can be ill-formed, so where do you stop? If the Standard did specify that `operator==(vector<T>, vector<U>)` **does not participate in overload resolution** if `T` and `U` are not comparable, then we'd test that. But as it is, the fact that `T` and `U` are `Cpp17EqualityComparable` is a precondition (http://eel.is/c++draft/description#structure.specifications-3.3), which means that it's technically UB to perform the comparisons you do below.
> > > 
> > > I think my preferred approach here would be to drop the negative tests but definitely keep the positive ones.
> > @rarutyun I'm also interested to learn more about how you measured the coverage. The coverage available at http://lab.llvm.org:8080/coverage/coverage-reports/index.html misses a lot of files and I'm not convinced the values it shows are correct. So I would be very interested to get the correct values.
> > Could you elaborate a bit on how you're measuring code coverage (especially for heavily templated code like this)?
> 
> @Quuxplusone Sure. We used the BullsEye tool. For template code it tracks that a template was instantiated at least once (with any type) and called. Of course, if a class has a partial specialization with its own member functions, those won't be covered unless you instantiate the class so that it matches the partial specialization and call the method you want to cover.
> Beyond that, the tool checks that all branches within a function are covered, as usual. Nothing special, IIRC.
> 
> > The coverage available at http://lab.llvm.org:8080/coverage/coverage-reports/index.html
> 
> @Mordante From what I can see, the report above (if I looked correctly) does not cover the `include/` folder, only `src/`. Thus, I believe many things might be missed. Roughly, we build the tests and then tell the tool that the `include/` folder needs to be covered. That is a very high-level view of how we do it; I don't have the particular technical details at hand right now.
> 
> > So I would be very interested to get the correct values.
> 
> We believe in the numbers we got, but this tool has a drawback (as do all the tools we tried): it cannot instrument `constexpr` functions, because it injects special code into the function body that does not fulfill the `constexpr` requirements. As a result, BullsEye simply ignores such functions.
> 
> We had the idea of removing `constexpr` with a script on the fly, just to collect code coverage metrics, but as you may guess it is not that easy: `constexpr` functions can be called in contexts where a constant expression is required (e.g., as a non-type template argument). With that approach applied, the code fails to compile in some circumstances.
> 
> I think my preferred approach here would be to drop the negative tests but definitely keep the positive ones.

Ok, I see your point. Basically, you are saying that negative tests are good for //Constraints// but do not bring much value for //Preconditions//. That makes perfect sense to me. Now I better understand the criteria.

We will keep that in mind for further patches, but if anything that tests //Preconditions// remains, please let us know.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D111738/new/

https://reviews.llvm.org/D111738



More information about the libcxx-commits mailing list