Why Code Coverage Alone Doesn’t Mean Squat

Agile software development is all the rage these days. One of Agile’s cornerstones is the concept of test-driven development (TDD). With TDD, you write the test first, and write only enough code to make the test pass. You then repeat this process until all functionality has been implemented, and all tests pass. TDD leads to more modular, more flexible, and better-designed code. It also gives you, by the very nature of the process, a unit test suite that executes 100% of the code. This can be a very nice thing to have.
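
To make that cycle a little more concrete, here is a tiny, hypothetical example of one TDD iteration. The Greeter class and its test are made up purely for illustration; the point is simply that the test comes first, and the production code exists only to make it pass.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreeterTest {

    @Test
    public void greets_a_user_by_name() {
        // Written first: this test fails (it won't even compile) until
        // Greeter.greet() exists and returns the expected greeting.
        assertEquals("Hello, Ada!", new Greeter().greet("Ada"));
    }
}

// Written second: only enough code to make the test above pass.
class Greeter {
    String greet(String name) {
        return "Hello, " + name + "!";
    }
}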

However, like most things in life, people often focus on the destination and pay little attention to the journey required to get there. We as human beings are always looking for shortcuts. Some software managers see 100% code coverage as a must-have, without really caring how that goal is achieved. But it is the journey to 100% code coverage that provides the benefits most people associate with simply having 100% code coverage. Without taking the correct roads, you can easily create a unit test suite that exercises 100% of your code base and still end up with a buggy, brittle, and poorly designed code base.

100% code coverage does not mean that your code is bug free. It doesn’t even mean that your code is being properly tested. Let me walk through a very simple example.

I’ve created a class, MathHelper, that I want to test. MathHelper has one method, average, that takes a List of Integers.

/**
 * Helper for some simple math operations.
 */
public class MathHelper {

    /**
     * Average a list of integers.
     * 
     * @param integerList The list of integers to average.
     * @return The average of the integers.
     */
    public float average(List<Integer> integerList) {
        ...
    }
}

Caving in to managerial pressure to get 100% code coverage, we quickly whip up a test for this class. Abracadabra, and poof! 100% code coverage!

[Screenshot: the coverage report shows a green bar at 100% coverage]

So, we’re done. 100% code coverage means our class is adequately tested and bug free. Right? Wrong!

Let’s take a look at the test suite we put together to reach that goal of 100% code coverage.

public class MathHelperTest {

    private MathHelper _testMe;

    @Before
    public void setup() {
        _testMe = new MathHelper();
    }

    @Test
    public void poor_example_of_a_test() {
        List<Integer> nums = Arrays.asList(2, 4, 6, 8);
        _testMe.average(nums);
    }
}

Ugh! What are we really testing here? Not much. poor_example_of_a_test simply verifies that the call to average doesn’t throw an exception. That’s not much of a test at all. Now, this may seem like a contrived example, but I assure you it is not. I have seen several tests like this testing production code, and I assume that you probably have too.

So, let’s fix this test by actually adding a test!

    @Test
    public void a_better_example_of_a_test() {
        List<Integer> nums = Arrays.asList(2, 4, 6, 8);
        float result = _testMe.average(nums);
        assertEquals(5.0, result, 0.0);
    }

Let’s run it, and see what we get.

java.lang.AssertionError: expected:<5.0> but was:<2.0>

Well, that’s certainly not good. How could the average of 2, 4, 6, and 8 be 2? Let’s take a look at the method under test.

    public float average(List<Integer> integerList) {
        long sum = 0;
        for (int i = 0; i < integerList.size() - 1; i++) {
            sum += integerList.get(i);
        }
        return sum / integerList.size() - 1;
    }

Ok, there’s our bug. The loop skips the last element of the list we were passed, and the return expression subtracts 1 after dividing instead of simply dividing the sum by the list size. Let’s fix it.

    public float average(List<Integer> integerList) {
        long sum = 0;
        for (Integer i : integerList) {
            sum += i;
        }
        // Cast to float so integer division doesn't truncate the result.
        return (float) sum / integerList.size();
    }

We run the test once again, and verify that our test now passes. That’s better. But let’s take a step back for a second. We had a method with unit tests exercising 100% of the code that still contained this very critical, very basic error.

With this bug now fixed, we commit the code to source control, and push a patch to production. All is fine and dandy until we start getting hammered with bug reports describing NullPointerExceptions and ArithmeticExceptions being thrown from our method. Taking another look at the code above, we realize that we have not done any validation of the input parameter to our method. If the integerList is null, the for loop will throw a NullPointerException when it tries to iterate over the list. If the integerList is an empty list, we will end up trying to divide by 0, giving us an ArithmeticException.

First, let’s write some tests that expose these problems. The average method should probably throw an IllegalArgumentException if the argument is invalid, so let’s write our tests to expect that.

    @Test(expected=IllegalArgumentException.class)
    public void test_average_with_an_empty_list() {
        _testMe.average(new ArrayList<Integer>());
    }
    
    @Test(expected=IllegalArgumentException.class)
    public void test_average_with_a_null_list() {
        _testMe.average(null);
    }

We first run the new tests and verify that they fail, with the current code throwing a NullPointerException and an ArithmeticException instead of the IllegalArgumentException we expect. Now, let’s fix the method.

    public float average(List<Integer> integerList)
            throws IllegalArgumentException {

        if (integerList == null || integerList.isEmpty()) {
            throw new IllegalArgumentException(
                "integerList must contain at least one integer");
        }

        long sum = 0;
        for (Integer i : integerList) {
            sum += i;
        }
        // Cast to float so integer division doesn't truncate the result.
        return (float) sum / integerList.size();
    }

We run the tests again, and verify everything now passes. So, there wasn’t just one bug that slipped in, but three! And, all in code that had 100% code coverage!

As I said at the beginning of the post, having a test suite that exercises 100% of your code can be a very valuable thing. If you achieve it through TDD, you will see many or all of the benefits I listed above. Having a solid test suite also helps guard you against introducing regressions into your code base, letting you find and fix bugs earlier in the development cycle. However, it is very important to remember that the goal is not to have 100% coverage, but to have complete and comprehensive unit tests. If your tests aren’t really testing anything, or aren’t testing the right things, then they are virtually useless.
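
For illustration, here are a couple of extra tests we might add for average to make the suite more comprehensive (the test names and values are just examples): one covers a single-element list, and one checks that an average that isn’t a whole number doesn’t get truncated.

    @Test
    public void average_of_a_single_element_list_is_that_element() {
        assertEquals(7.0, _testMe.average(Arrays.asList(7)), 0.0);
    }

    @Test
    public void average_that_is_not_a_whole_number_is_not_truncated() {
        // (1 + 2) / 2 should be 1.5, not 1
        assertEquals(1.5, _testMe.average(Arrays.asList(1, 2)), 0.001);
    }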

Code coverage tools are great for highlighting areas of your code that you are neglecting in your unit testing. However, they should not be used to determine when you are done writing unit tests for your code.


    4 thoughts on “Why Code Coverage Alone Doesn’t Mean Squat”

    1. I disagree; I think that even bad code coverage is much better than no code coverage. Unit tests should go through code review, and that way you will catch the bad and useless tests. I think that the first time you try to get to 100%, some of your unit tests will be bad, but that’s why you need review. It’s freaking hard to write so many unit tests at once, and that’s why quality drops with every new test you write.

    2. Raveman, thanks for the comment.

      I’m not arguing for no code coverage. The point I was trying to make is that I’ve seen people focus on obtaining 100% code coverage as their main objective. I think this is wrong, and that writing good unit tests should be the main objective. If simply obtaining 100% code coverage is your goal, then you can reach that goal very easily with poor tests, which doesn’t get you very far. One of the things I like about test-driven development is that you never have to “try to get to 100%”. Since you write the tests before you write the code, you will always be at 100%. So, the only thing to worry about is writing good code, and good tests.

    3. Funny, I was thinking about this the other day at work. I would definitely take quality tests over total testing coverage any day, especially over bad test coverage, because that only gives you a false sense of confidence. At least if something breaks and there is no test, you know you should have tested it and can add a test. If “tested” code breaks and the tests that cover that code are bad, then it can lower your confidence in your other existing tests. That said, I too am guilty of skipping tests or putting them off until later, sigh, I wish testing was as fun as coding :)

    4. Hey Matt, thanks for the comment. I agree with you 100%….except for your comment on testing :) Testing can be fun. Have you considered writing tests in Groovy or JRuby to spice things up? I added ant targets to build and run Groovy tests in slapp. Check it out if you’re interested.
