Monday, January 27, 2014

NCrunch - my favourite test runner

What is your favourite test runner? I have been using many different test runners over the years. Once in a while I find a better one and switch to that. As I mentioned in my previous blog post, Pillars of Unit Tests, one of the most important things when establishing a test environment is ease of use. Running the tests should not be a hurdle.

NCrunch

Some time ago I stumbled upon NCrunch, a .Net test runner for Visual Studio. It represents a whole new paradigm when it comes to running tests. In fact, the developer does not need to run the tests anymore -- NCrunch does it for you while you type. You don't even need to save the file!

While you edit the code, NCrunch uses coloured dots to the left of the code to indicate the test status of that particular line.

Hopefully your code usually looks like this, with happy green dots covering everything. That means your code is covered by passing tests. Production code is to the left and test code to the right:

All systems operational

Let's try to introduce a bug... just for the fun of it. Red dots appear on all lines in the production code that are covered by failing tests. Note the red x on one of the lines in the test: that is the offending line that makes the test fail.

What if we comment out some tests so that parts of the production code are no longer covered by tests? NCrunch marks the non-covered lines with black dots... and all bets are off!

Pretty impressive, and quite useful. You'll find it at http://www.ncrunch.net/.

By the way, I am not affiliated with the authors of NCrunch in any way. I'm just a happy user.

Saturday, January 25, 2014

Pillars of unit tests

What is a good test? It may seem like a trivial question; a test should flash red if something is wrong and be green otherwise. But how do we achieve that? There are some pillars, or golden rules, that should always be honoured:

Trustworthiness

Simple. If you don't trust the tests, they are not worth anything. You should be able to trust that tests 
    A) fail when they should
    B) don't fail when they shouldn't.

If a test all of a sudden turns red and the response is "well, that test fails once in a while. It's normal", the test is not trustworthy.

What can we do to make tests more trustworthy?
  • Don't rely on things that can change from one test execution to the next. Don't use timers or random numbers in tests. If this randomness causes failing tests from time to time, developers stop trusting them. A test that fails from time to time should be seen as an indication of an intermittent problem in the system under test, not a problem in the test. (A sketch of one way to keep such tests deterministic follows after the code example below.)
  • Don't make assumptions on or introduce dependencies to the environment. If the tests depend on the file system, the graphics card or the phase of the moon, the tests will become fragile and fail for the wrong reason.
  • Don't overspecify the test. Have a clear vision of what the test is verifying. Don't add a bunch of asserts unless you have a very specific reason for adding them. Usually, each test should contain only one assert. Tests that are overspecified will often fail for reasons that are not related to the test itself. What happens then? Developers will stop trusting the tests!
Consider this test, which verifies that the utility class StringTools returns the reverse of a string:

    [Test]
    public void ReverseString_CalledWithString_ReturnsStringInReverse()
    {
      // Arrange
      string input = "abc";

      // Act
      string output = StringTools.ReverseString(input);

      // Assert
      Assert.AreEqual("cba", output);
      Assert.AreEqual(3, output.Length); // This is overspecification
    }

After verifying that the reversed string is returned, the length is also asserted. That's overspecification! The first assert already does a perfectly valid and sufficient verification. The second assert only makes the test more fragile and less maintainable, because we can't change the length of the test string without also changing that assert. It adds complexity without yielding any value.
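
Back to the first bullet point about timers and random numbers: a common source of nondeterminism is code that reads the system clock directly. Here is a minimal sketch of one way to keep such a test deterministic. The ReportGenerator class and the IClock abstraction are made up for this illustration; the idea is simply to inject the time source so that the test controls it.

    // Hypothetical abstraction: the time source is injected instead of
    // the production code calling DateTime.Now directly.
    public interface IClock
    {
      DateTime Now { get; }
    }

    public class ReportGenerator
    {
      private readonly IClock clock;

      public ReportGenerator(IClock clock)
      {
        this.clock = clock;
      }

      public string MakeHeader()
      {
        return "Report generated " + this.clock.Now.ToString("yyyy-MM-dd");
      }
    }

    // A fixed clock makes the test produce the same result on every run.
    public class FixedClock : IClock
    {
      public DateTime Now
      {
        get { return new DateTime(2014, 1, 1); }
      }
    }

    [Test]
    public void MakeHeader_Called_ContainsGenerationDate()
    {
      // Arrange
      var generator = new ReportGenerator(new FixedClock());

      // Act
      string header = generator.MakeHeader();

      // Assert
      Assert.AreEqual("Report generated 2014-01-01", header);
    }

The same pattern applies to random numbers and to environment dependencies like the file system: hide them behind an abstraction and hand the test a predictable implementation.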

Maintainability

One of the most common pitfalls when doing TDD is test maintenance. As a system grows, it's easy to forget about the tests. As specifications change, it becomes hard to adapt the tests to the new requirements. Treat your test code with the same care as your production code!

Whenever you have finished a test, consider refactoring it. Also consider whether the testee class is becoming hard to test; perhaps it's time to refactor both the test class and the testee class.

If each of the tests for a specific test class is doing multiple lines of setup, it might be a good idea to refactor this into helper methods.

As an example, consider these tests, where several methods in the "TesteeClass" class are being tested:

    [Test]
    public void SomeMethod_CalledWithNull_ReturnsFalse()
    {
      // Arrange
      var testeeObject = new TesteeClass();
      testeeObject.Initialize();
      testeeObject.SomeProperty = new SomeOtherClass();

      // Act
      bool result = testeeObject.SomeMethod();

      // Assert
      Assert.IsFalse(result);
    }

    [Test]
    public void SomeOtherMethod_Called_ReturnsTrue()
    {
      // Arrange
      var testeeObject = new TesteeClass();
      testeeObject.Initialize();
      testeeObject.SomeProperty = new SomeOtherClass();

      // Act
      bool result = testeeObject.SomeOtherMethod();

      // Assert
      Assert.IsTrue(result);
    }

You can enhance readability and maintainability with some refactoring:

    [Test]
    public void SomeMethod_CalledWithNull_ReturnsFalse()
    {
      // Arrange
      var testeeObject = this.MakeTesteeObject();

      // Act
      bool result = testeeObject.SomeMethod();

      // Assert
      Assert.IsFalse(result);
    }

    [Test]
    public void SomeOtherMethod_Called_ReturnsTrue()
    {
      // Arrange
      var testeeObject = this.MakeTesteeObject();

      // Act
      bool result = testeeObject.SomeOtherMethod();

      // Assert
      Assert.IsTrue(result);
    }

    private TesteeClass MakeTesteeObject()
    {
      var testeeObject = new TesteeClass();
      testeeObject.Initialize();
      testeeObject.SomeProperty = new SomeOtherClass();

      return testeeObject;
    }

Readability

Your tests need to be readable. They are your code-level functional requirements document. If a test is hard to read, it's hard to maintain. It is also hard to figure out why it fails.

Pretend like an axe murderer is going to read your tests. He knows where you live, and he gets really mad if he can't understand your tests. You don't want to make him mad.

  • Avoid using loops and logic. It should be obvious to the reader what the test does. If a test contains loops, ifs, logic and other constructs, the reader needs to think in order to understand the test. (A sketch follows after the code example below.)
  • Use the smallest possible dataset. If you develop an algorithm, use the smallest possible dataset needed to verify the algorithm. This will make it easier to read the test, and execution will be faster.
  • Avoid using magic numbers. If a test contains cryptic numbers, the axe murderer will wonder whether the number has a meaning. Use the lowest possible number so that it's obvious that the number is just an arbitrary input number.
Consider these two tests, one using magic numbers and one using the lowest possible number:

    [Test]
    public void AddNumbers_CalledWithTwoNumbers_ReturnsSum()
    {
      // Arrange
      double number1 = 54254; // Does this number have a meaning??
      double number2 = 64333;

      // Act
      double sum = Calculator.AddNumbers(number1, number2);

      // Assert
      Assert.AreEqual(118587, sum);
    }

    [Test]
    public void AddNumbers_CalledWithTwoNumbers_ReturnsSum()
    {
      // Arrange
      double number1 = 1; // It's obvious -- it's just an arbitrary number
      double number2 = 2;

      // Act
      double sum = Calculator.AddNumbers(number1, number2);

      // Assert
      Assert.AreEqual(3, sum); // It's easier to understand the expected result
    }
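
And returning to the first bullet, about loops and logic: compare a loop-based test with one that states its input and expected output explicitly. The Calculator.Square method here is made up for the illustration.

    // Harder to read: the reader has to execute the loop mentally
    // to figure out what is actually being verified.
    [Test]
    public void Square_CalledWithNumbers_ReturnsSquares()
    {
      for (int i = 1; i <= 3; i++)
      {
        Assert.AreEqual(i * i, Calculator.Square(i));
      }
    }

    // Easier to read: the input and the expected output are explicit.
    [Test]
    public void Square_CalledWithTwo_ReturnsFour()
    {
      // Arrange
      int input = 2;

      // Act
      int result = Calculator.Square(input);

      // Assert
      Assert.AreEqual(4, result);
    }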

Ease of use

Although this item does not pertain to the tests themselves, it is equally important. It should be easy to run the tests, and they should run fast. Once it becomes a hurdle to run the tests, developers will stop running them. It also becomes hard to get into the smooth test-driven flow where you develop test and production code in parallel.

Make sure that your tests run fast, and choose a test runner that allows you to run tests easily. Visual Studio has a decent test runner if you code in .Net. If you use ReSharper, you have an even better test runner.


Happy TDD'ing!

Thursday, January 23, 2014

Why do TDD?

So what is test-driven development all about? Why do TDD?

The short version is: TDD is the practice of writing tests before, or in parallel with, the production code. The developer knows immediately whether the code works as expected, without starting the application. Moreover, since the tests run automatically, developers are alerted immediately if they make a change to the code that introduces a defect.
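
As a minimal illustration of that rhythm (the Calculator class here is made up for the example): the test is written first and fails, and then just enough production code is written to make it pass.

    // Step 1: write the test first. It fails (it doesn't even compile)
    // because Calculator.Add does not exist yet.
    [Test]
    public void Add_CalledWithOneAndTwo_ReturnsThree()
    {
      // Arrange & Act
      int sum = Calculator.Add(1, 2);

      // Assert
      Assert.AreEqual(3, sum);
    }

    // Step 2: write just enough production code to make the test pass.
    public static class Calculator
    {
      public static int Add(int a, int b)
      {
        return a + b;
      }
    }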

Numerous articles have been written about the benefits of TDD. To summarize: done right, TDD results in higher code quality.

Benefits of TDD from a technical point of view

So what's in it for us developers? Let's have a look at some of the benefits of doing TDD.

Bugs are caught earlier

Has it ever happened to you that you developed something, and someone else (or you) introduced a bug a year later? Perhaps some new functionality was added, or some optimizations were done, and all of a sudden your favourite algorithm failed?

Was it hard to figure out when this bug was introduced and hence hard to fix it? Perhaps the bug even found its way to the customer?

Well, you are not alone! That happens to us all. The good news is: the probability of discovering the bug immediately is much higher if the existing code is covered by tests!

You would rather let a test find the defect than have an angry customer find it.

Refactor with confidence

Refactor often, they say. Frequent refactoring yields a continuous improvement of the architecture as the codebase grows. So how do you know that the refactoring does not introduce defects? By having automated tests, of course! If you can refactor with a lower likelihood of creating defects, you can refactor more often.

Better architecture

Now that's a bold claim. How can tests enhance the architecture? Because doing TDD with a poor architecture is painful. Good tests and a proper TDD approach require that the production code follows the SOLID principles. You may not have heard of the SOLID principles, but you most likely use them already. These principles include commonly accepted best practices like the Single Responsibility Principle (let your class do one thing only), decoupling and dependency injection.

If you write the test first, you are forced to follow these principles. If it turns out that a class is not testable, it usually means that you are violating one or more of them. Then it's a good idea to refactor, or perhaps redesign the class altogether.
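
As a rough sketch of what that looks like in practice (the class names are made up for this illustration): a class that creates its own concrete dependencies inside its methods cannot be tested in isolation, while a class that receives its dependencies through the constructor can be handed a simple fake in the test.

    public interface IOrderRepository
    {
      bool Save(int amount);
    }

    // The dependency is injected, so the test can replace it.
    public class OrderService
    {
      private readonly IOrderRepository repository;

      public OrderService(IOrderRepository repository)
      {
        this.repository = repository;
      }

      public bool PlaceOrder(int amount)
      {
        return this.repository.Save(amount);
      }
    }

    // A simple fake used by the test instead of a real database.
    public class FakeOrderRepository : IOrderRepository
    {
      public bool Save(int amount)
      {
        return true;
      }
    }

    [Test]
    public void PlaceOrder_RepositorySavesSuccessfully_ReturnsTrue()
    {
      // Arrange
      var testeeObject = new OrderService(new FakeOrderRepository());

      // Act
      bool result = testeeObject.PlaceOrder(1);

      // Assert
      Assert.IsTrue(result);
    }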

The bottom line is: TDD is an efficient design tool because it encourages a clean and decoupled design. Complex code is hard to test and is a bad idea to begin with!

Where to start

So how do you get started with test-driven development? That's too large a topic to cover in this blog post, but I highly recommend getting a book. You will save yourself countless hours of frustration if you get a fundamental understanding of writing tests instead of reading random articles on the web.

The Art of Unit Testing by Roy Osherove is a very good book. Make sure you get the 2nd edition, as it is more up to date on tools and methodology.

One last word: remember that test-driven development is not magic. It will not magically solve all your problems and make your codebase bug free tomorrow. It's a tool that, if used correctly, will help you create software with fewer defects, higher quality and better architecture.

TDD is software development done the scientific way. Good luck!


Welcome to TDD Addict

Welcome to my blog about test-driven development (TDD)!

I am a software engineer at Blueback Reservoir, a company specializing in consulting services and software solutions for the global oil & gas exploration and production industry.

One of my favourite topics within software development is automated software testing. Every so often we all stumble upon challenges related to unit testing, and I will publish random thoughts on the topic here. I hope I can help others improve their TDD skills by sharing my experiences.

Happy TDD'ing!