
digitalmars.D.bugs - [Issue 14856] New: -cov should not count unittest blocks

https://issues.dlang.org/show_bug.cgi?id=14856

          Issue ID: 14856
           Summary: -cov should not count unittest blocks
           Product: D
           Version: D2
          Hardware: All
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P1
         Component: dmd
          Assignee: nobody@puremagic.com
          Reporter: issues.dlang@jmdavisProg.com

I propose that unittest blocks not be counted when calculating code coverage.
They can still show the count next to the lines - I don't care about that - but
the fact that they're taken into account when calculating the coverage
percentage is a problem for some unit tests - particularly those which try to
be helpful and print out extra information on failure. Take this code from a
test in std.datetime, for example:

void main()
{
}

unittest
{
    import std.datetime, std.format, std.stdio, std.typetuple;
    foreach(ct; TypeTuple!(ClockType.coarse, ClockType.precise, ClockType.second))
    {
        scope(failure) writefln("ClockType.%s", ct);
        auto value1 = Clock.currTime!ct;
        auto value2 = Clock.currTime!ct(UTC());
        assert(value1 <= value2, format("%s %s", value1, value2));
        assert(abs(value1 - value2) <= seconds(2));
    }
}

Its coverage results look like this:

       |void main()
       |{
       |}
       |
       |unittest
       |{
       |    import std.datetime, std.format, std.stdio, std.typetuple;
      3|    foreach(ct; TypeTuple!(ClockType.coarse, ClockType.precise, ClockType.second))
       |    {
0000000|        scope(failure) writefln("ClockType.%s", ct);
      3|        auto value1 = Clock.currTime!ct;
      3|        auto value2 = Clock.currTime!ct(UTC());
      3|        assert(value1 <= value2, format("%s %s", value1, value2));
      3|        assert(abs(value1 - value2) <= seconds(2));
       |    }
       |}
q.d is 83% covered

The scope(failure) line never ran, and it shouldn't have, because it's there
purely so that better information can be printed when a failure does occur. So,
the fact that the unit test tries to make debugging tests easier actually
prevents the code from achieving 100% coverage, even if the actual code is 100%
covered.

Another, more critical example is any test that involves random numbers or
anything else that is non-deterministic. A test which uses a random number
generator should print out the generator's seed when the test fails so that
the failure can be reproduced. So, while in the code example above you could
argue that a developer should just add the print statements when debugging the
problem rather than leave them in and affect the code coverage, there _are_
cases where you simply can't do that - the print statements need to be there
all the time. You're then stuck either printing on every run (even when the
tests pass) or using scope(failure) as above so that the print lines only run
when the tests fail, in which case those lines count as 0 towards the code
coverage, and fully covered code fails to hit 100% coverage.
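
To illustrate, here's a hypothetical sketch of that random-seed pattern (the
names are just for illustration, not from any existing test): the seed is
always captured, but only printed when the test fails - and under -cov, the
scope(failure) line would show a count of 0 in a passing run:

```d
unittest
{
    import std.random : Random, uniform, unpredictableSeed;
    import std.stdio : writefln;

    // Capture the seed up front so a failure can be reproduced exactly.
    immutable seed = unpredictableSeed;
    auto rng = Random(seed);

    // Only runs if an assertion below fails, so passing runs stay silent -
    // but this line then counts as 0 towards the coverage percentage.
    scope(failure) writefln("random seed was: %s", seed);

    foreach(i; 0 .. 100)
    {
        auto x = uniform(0, 1000, rng);
        assert(x >= 0 && x < 1000);
    }
}
```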

And considering that it's not the unit tests themselves that we're trying to
cover - but rather the code that they test - I really don't see a problem with
excluding unittest blocks from code coverage.

And it's stuff like this which prevents properly covered code from being
listed as having 100% coverage, which in turn means that full coverage has to
be verified manually - a potentially time-consuming process that doesn't play
well with automated coverage-verification tools. So, I think that it would be
a good move to not count unittest blocks towards the code coverage - be that
by not putting any numbers next to those lines or by making those numbers
simply not count towards the total. Still showing the counts but excluding
them from the total would probably be better, simply because it would then be
easier to catch when unittest code wasn't being run for some reason (e.g. it
was using bad values at the edge cases), but if it's easier to just not put
numbers next to them, then I still think that that would be better than
counting them towards the total, since we really need it to be _very_ uncommon
for properly covered code to be unable to hit 100% per -cov.

--
Aug 01 2015