Many teams have a specific goal for code coverage; a typical goal is at least 80%. Others claim that code coverage is a meaningless number, because you can reach 100% coverage with nonsense tests that have no value. So which coverage should one aim for?
Coverage is not a measure of quality
You can have 100% test coverage and yet the tests may be worthless. A single test that simply executes every line of the production code without asserting anything will yield 100% coverage, but it won't detect a single defect. Clearly, coverage can't be used to measure test quality. So what is it good for?
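To make that concrete, here is a minimal sketch in Python (the calculator module and pytest-style test are hypothetical): the test exercises every line of the production code, so a coverage tool reports 100%, yet it would still pass if divide returned a completely wrong result.

    # calculator.py (hypothetical production code)
    def divide(a, b):
        if b == 0:
            raise ValueError("cannot divide by zero")
        return a / b

    # test_calculator.py -- executes every line, asserts nothing
    from calculator import divide

    def test_divide_touches_every_line():
        divide(10, 2)          # result is thrown away, nothing is checked
        try:
            divide(10, 0)      # error branch executed...
        except ValueError:
            pass               # ...and the exception silently swallowed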
Coverage is an alarm bell
Although 100% coverage does not tell you much, the opposite case does. If your test coverage is 20%, the vast majority of the production code is not exercised by any test at all. Sound the alarm and do something!
Hence, code coverage should be used as an indication that something is wrong and not as a measure of test quality.
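One way to wire up that alarm bell is to fail the build when coverage drops below a floor. The sketch below uses the coverage.py API after the test suite has already been run under coverage; the script name and the threshold are assumptions, not a recommendation for any particular number.

    # check_coverage.py -- hypothetical alarm-bell script (requires coverage.py)
    # Run the tests first, e.g.: coverage run -m pytest
    import sys
    import coverage

    ALARM_THRESHOLD = 50  # percent; pick whatever "something is wrong" means to you

    cov = coverage.Coverage()
    cov.load()              # read the .coverage data file written by the test run
    total = cov.report()    # prints the per-file table and returns the total percentage

    if total < ALARM_THRESHOLD:
        print(f"ALARM: only {total:.0f}% of the production code is exercised by tests")
        sys.exit(1)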
So which coverage should we aim for?
Personally, I rarely measure code coverage. If you do proper TDD (write the test first and never touch the production code before you have a failing test), your coverage will naturally be close to 100%, because every piece of production code exists only to satisfy a test of the desired behavior. Some test runners even decorate the source code as you type to indicate whether it is covered by tests, so untested code will be screaming at you.
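A tiny red/green sketch of why that happens (module and test names are made up): the production code only comes into existence because a failing test demanded it, so it is covered by construction.

    # Step 1 (red): write the test first; it fails because price.py does not exist yet.
    # test_price.py
    from price import discounted_price

    def test_quarter_discount():
        assert discounted_price(100.0, 0.25) == 75.0

    # Step 2 (green): write just enough production code to make the test pass.
    # price.py
    def discounted_price(amount, discount):
        return amount * (1 - discount)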
If a team wants to run periodic code coverage analysis in order to enable the lack-of-tests alarm bell, I would aim for close to 100% on testable code and 0% on untestable code. Testable and untestable code should ideally be separated into different modules of the application, but that's another story which deserves a separate blog post at some point in the future...
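With that separation in place, the measurement itself can be scoped to the testable modules only. The sketch below again assumes coverage.py and pytest; the package names and paths are made up for illustration.

    # run_testable_coverage.py -- hypothetical sketch, assumes coverage.py and pytest
    import coverage
    import pytest

    cov = coverage.Coverage(
        source=["myapp.domain", "myapp.services"],    # testable modules (assumed names)
        omit=["*/myapp/ui/*", "*/myapp/adapters/*"],  # untestable glue (assumed names)
    )
    cov.start()
    pytest.main(["tests"])    # run the test suite while measuring
    cov.stop()
    cov.save()
    print(f"coverage of testable code: {cov.report():.0f}%")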