Wednesday, October 15, 2014

Doing it wrong

Test-driven development is great and all that... if you do it right. My blog posts so far have been focusing on how to do it right. However, it's important to be aware of the pitfalls so that they can be avoided.

Many teams that try to TDD fail at some point. What are the most common mistakes when trying to introduce test-driven development?

Not realizing the paradigm shift

Doing test-driven development is not just about "writing unit tests". It's a whole new way to do software development. Therefore, it's important that the team realizes that it takes some time to get up to speed on TDD.

This is like walking across a ravine from one hilltop to the next. Your team is probably well-established and has reached a productivity peak. In order to reach the next peak, you need to realize that your productivity will suffer for a limited period while you adapt to a new development paradigm:

Crossing the ravine
If the team, their manager or other stakeholders don't realize this, they may get impatient and decide to abandon the whole TDD idea. What a misery!

No buy-in from management

This goes hand in hand with the previous section. Switching to TDD is an investment. Like any other investment, it comes with a cost. The management needs to realize that there will be an initial cost in terms of training and temporarily reduced productivity while crossing the ravine.

The management also needs to realize that this is an investment that pays off. After the initial cost, the benefit is obvious: software with fewer bugs means higher quality, a shorter beta testing phase and happier customers. It's not always easy to quantify this benefit into a language that managers and shareholders understand (the $/£/€/kr language), though.

Developers decide up front that TDD is a bad idea

This is perhaps one of the toughest obstacles. If the development team is reluctant to do TDD in the first place, it's hard to enforce it. As a team lead, one of your most important tasks will be to motivate the team to do TDD. A TDD introductory course is highly recommended, as it's hard to learn TDD on your own without any mentoring.

It's important that a critical mass within the team does proper TDD. If several developers completely ignore the fact that there are tests that need to be maintained, they will quickly ruin the entire TDD process for the others.

No focus on maintainability

I mentioned maintainability in my blog post "Pillars of unit tests". You should treat your test code with the same care as your production code. Your test code is not a second-class citizen. Test code should be reviewed and refactored as often and as carefully as the production code.

If the test code turns into spaghetti, maintainability will suffer. As you add or change functionality in the production code, it will become harder and harder to make the required changes to the test code. The team may give up testing new use cases, tests will start failing for the wrong reasons, and eventually the team gives up on TDD altogether.

Too hard to run tests or tests are not trusted

It should be easy to run tests. Ideally, the team should use a test runner like NCrunch so that the developers don't need to run the tests manually.

If it's hard to run tests, or the tests run very slowly, developers tend to skip it. They may check in broken code into the source repository because they don't realize that the tests are failing. All hope is lost.

Also, it doesn't help to run the tests if you don't trust them. If the team does not have a good habit of rejecting code that causes tests to fail, the team will stop trusting the tests. If the tests are not trustworthy, much of the benefit with TDD is lost.

Thursday, August 7, 2014

Productivity tips: live templates in Visual Studio and ReSharper

In my blog post Two readability tips for naming and organizing tests, I shared some tips for arranging tests. The tests are typically named and organized like this:


[Test]
public void ItemUnderTest_Scenario_ExpectedBehaviour()
{
  // Arrange
  // ...arrange the test here

  // Act
  // ...do the action which is being tested here

  // Assert
  // ...do the assertions here
}

While the naming and arrangement scheme is simple enough, it quickly becomes boring to write this stub every time you write a new test. If you are fortunate enough to use Visual Studio with the ReSharper plug-in, however, there is a nice feature called "live templates" which can do it for you!

Live templates allow you to type a keyword, select one of the live template entries from the menu that pops up, and ReSharper will insert the template for you. In this case, the template for a test is named "test":




After the template is pasted into the test class, there is no need to navigate the cursor around in order to edit the details in the test name. Red rectangles represent the fields that you will typically want to edit (ItemUnderTest, Scenario and ExpectedBehaviour). Enter the contents of those fields, press enter after editing each field, and then you're ready to implement the test!

Create and edit the live templates in ReSharper->Templates Explorer. As you can see, there are already many useful built-in templates:




Notice how you define the fields that the user typically edits after inserting the template: $ItemUnderTest$, $Scenario$ and $ExpectedBehaviour$. After the user has entered the contents of those fields, the cursor is placed at the optional $END$ keyword.
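A minimal version of such a template body could look roughly like this (the exact text is simply whatever you type into the Templates Explorer):

[Test]
public void $ItemUnderTest$_$Scenario$_$ExpectedBehaviour$()
{
  // Arrange
  $END$

  // Act

  // Assert
}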


Thursday, June 26, 2014

Help, my class is turning into an untestable monster!

So, let's say that you have a nicely designed and unit tested class. Perhaps it's an MVVM dialog model which takes care of commands and displaying stuff. In the beginning, this class has a single list of selectable objects, a button which adds a new object and a button which removes objects:


Testing this is simple, isn't it? Create a test which verifies that the list has the relevant objects. Write another test which adds a new object and a third test which removes the selected object. Life is easy!

Now your client is so happy with the dialog, he has some additional requests. He wants to add some complex filtering so that only objects that meet certain criteria are visible. He also wants the list to be shown in a hierarchical tree instead of a flat list. Of course he also wants to define how to group them. Oh, and, based on the user's privileges, the remove functionality may or may not be enabled:


Now we have thrown a lot more logic into the dialog. There are multiple cases which yield a growing number of permutations -- how can we make sure that we cover all of those scenarios in tests? The user may or may not filter, he may or may not group into categories, etc. Your class is starting to get hard to test. It's turning into an untestable monster.

You will see this in your code. You will see it a lot. Well, it's a sign!


It's a sign

Testable code which turns into untestable code usually means that the code has a more fundamental problem. As a class grows, it is getting more and more responsibilities. More responsibilities means more scenarios to cover, and more complexity. The class is starting to violate The Single Responsibility Principle.

Regardless of whether we do TDD or not, this is a bad thing. It's time to divide and conquer -- split your complex multi-responsibility class into smaller, simpler single-responsibility classes. Those classes will be easier to test, and you will avoid all the complex test scenario permutations.

Divide and conquer

In our view model example, the view model has at least these responsibilities:
  • Do filtering of items based on some criteria
  • Organize the items into a hierarchy based on user selection
  • Present the hierarchy as a tree view
  • Allow user to select items
  • Add and remove items
  • Disable features based on user privileges
Obviously, this class is violating the Single Responsibility Principle! Hence, the root of the problem is not the testability as such -- it's the class itself. The increasing complexity of the tests has revealed that the production code is a candidate for refactoring.

In this case, I would split the class up into a filter class, a hierarchy organizer class, a user interaction handler and so on, as sketched below. This is a good idea anyway from a software quality perspective.
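As a rough sketch of the direction (the item type and class names here are made up for illustration), each responsibility gets its own small class:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical item type used by the collaborators below.
public class Item
{
  public string Name { get; set; }
  public string Category { get; set; }
}

// Single responsibility: filtering items based on some criteria.
public class ItemFilter
{
  public IEnumerable<Item> Apply(IEnumerable<Item> items, Func<Item, bool> criteria)
  {
    return items.Where(criteria);
  }
}

// Single responsibility: organizing items into a hierarchy.
public class HierarchyBuilder
{
  public ILookup<string, Item> GroupByCategory(IEnumerable<Item> items)
  {
    return items.ToLookup(item => item.Category);
  }
}

// Single responsibility: deciding what the current user is allowed to do.
public class PrivilegeChecker
{
  public bool CanRemoveItems(bool userIsAdministrator)
  {
    return userIsAdministrator;
  }
}

Each of these classes can be verified with a handful of trivial tests, and the view model shrinks to a thin coordinator that is easy to test by passing in fakes for the collaborators.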

Conclusion

As classes grow with an increasing number of responsibilities, testing becomes harder. Don't get tempted to skip the tests. Instead, refactor the production code class so that it becomes testable AND well-designed.

TDD is not only a first line of defense against new defects. It's also a very efficient design tool which forces you to keep your classes tidy and adhere to the SOLID principles.

Poor testability is a sign. Embrace the sign. Refactor your code today!

Wednesday, June 11, 2014

Automated testing of rendering code

In my blog post "TDD and 3D visualization", I wrote about a somewhat complex scenario: how to do TDD on 3D visualization. That blog post focused on visualization using high-level toolkits like Open Inventor or VTK. In that case, we don't test the rendered result. Instead, we test that the state of the scene graph is correct, and we trust the 3D toolkit to do the rendering correctly for us.

What if we write the rendering code ourselves? This may be the case if we write our own ray tracer or GPU shaders. In that case, we actually need to verify that the rendered pixels are correct.

Bitmap-based reference tests

TDD-style test-first development is not easy to do in this case, and it may not even be possible. The test result itself is much more complex than a trivial number or string result. How can we write a test that validates a complex image without having the rendered image up front?

In this case, it may be more convenient to write the rendering code first and then use a rendered image as a reference image for the test. We will lose some of the benefits of doing proper TDD, but those tests will still act as regression tests that verify that future enhancements do not introduce defects.

The production code and tests will thus be written like this:

  1. Write production code that renders to a bitmap
  2. Verify the reference image manually
  3. Write a test that compares future rendering results with this reference image
  4. Fail the test if they differ

Allow for slight differences

This may sound fairly trivial, but reference image-based tests have one major challenge: the rendered images may differ slightly because of different graphics cards and different graphics card drivers:


For example, different drivers may apply shading differently so that the colors vary between driver versions. Furthermore, antialiasing may offset the image by one pixel in either direction, and the edges may be rendered differently. We certainly don't want the tests to fail because of this, because then we stop trusting the tests.

Hence, we need to allow for small differences between the rendering result and the reference image. More specifically, we need to allow for
  • Single pixel offset
  • Small differences in color or intensity
I have been using this approach with good success:
  1. Render image
  2. For each pixel in the rendered image, find the pixel within the 3x3 neighbourhood in the reference image with the lowest deviation in RGB value. Add this deviation to a list.
  3. Create a histogram of all the pixel deviations so that you can calculate the distribution of the errors.
  4. Decide on an error threshold for acceptable differences. For example, say that
    • A maximum of 0.1% of the pixels can have a larger RGB deviation than 50
    • A maximum of 2% of the pixels can have a larger RGB deviation than 10
    • A maximum of 20% of the pixels can have a larger RGB deviation than 3
You should start with a fairly strict tolerance. If you find that you get too many false positives, increase the tolerance slightly.

By defining a deviation distribution tolerance like this, as sketched in the code below, you will allow for small variations while still catching defects that cause real rendering errors.
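Here is a minimal sketch of such a comparison, assuming the images are available as System.Drawing bitmaps and taking "RGB deviation" as the sum of the absolute per-channel differences (GetPixel is slow, so a real implementation would lock the bitmap bits instead):

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

public static class ImageComparer
{
  // Compares a rendered image to a reference image using a deviation distribution.
  public static bool IsAcceptable(Bitmap rendered, Bitmap reference)
  {
    var deviations = new List<int>();

    for (int y = 0; y < rendered.Height; ++y)
    {
      for (int x = 0; x < rendered.Width; ++x)
      {
        Color renderedPixel = rendered.GetPixel(x, y);
        int bestDeviation = int.MaxValue;

        // Find the best match within the 3x3 neighbourhood in the reference image.
        for (int dy = -1; dy <= 1; ++dy)
        {
          for (int dx = -1; dx <= 1; ++dx)
          {
            int nx = x + dx;
            int ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= reference.Width || ny >= reference.Height)
            {
              continue;
            }

            Color referencePixel = reference.GetPixel(nx, ny);
            int deviation = Math.Abs(renderedPixel.R - referencePixel.R)
                          + Math.Abs(renderedPixel.G - referencePixel.G)
                          + Math.Abs(renderedPixel.B - referencePixel.B);
            bestDeviation = Math.Min(bestDeviation, deviation);
          }
        }

        deviations.Add(bestDeviation);
      }
    }

    // Example thresholds -- tune them to your own renderer and drivers.
    double total = deviations.Count;
    return deviations.Count(d => d > 50) / total <= 0.001
        && deviations.Count(d => d > 10) / total <= 0.02
        && deviations.Count(d => d > 3) / total <= 0.20;
  }
}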

Render to bitmap, not to screen

If possible, render to an offscreen buffer in the tests. This is more robust than rendering to a window and then taking a screenshot, because the tests will not be obstructed by other windows, screensavers, a locked computer, etc. This might be a good architectural idea anyway, as it separates rendering from display.
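How to do this depends entirely on your rendering stack. As one hedged example, a WPF-based renderer can be captured offscreen with RenderTargetBitmap; an OpenGL-based renderer would typically render into a framebuffer object instead:

using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class OffscreenRenderer
{
  // Renders a WPF visual into an offscreen bitmap at 96 dpi.
  // No window needs to be visible, so the test cannot be disturbed by
  // overlapping windows, screensavers or a locked desktop.
  public static RenderTargetBitmap Render(Visual visual, int width, int height)
  {
    var bitmap = new RenderTargetBitmap(width, height, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);
    return bitmap;
  }
}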


Wednesday, May 28, 2014

Code coverage -- meaningless number or the Holy Grail?

Code coverage is a term which is often brought up when discussing TDD. Code coverage is the percentage of the lines of code that are executed by unit tests.



Many teams have a specific goal for the code coverage. For example, a typical goal is to have at least 80% code coverage. Others claim that code coverage is a meaningless number because you can have 100% code coverage with nonsense tests that have no value. So which coverage should one aim for?

Coverage is not a measure of quality

You can have 100% test coverage, and yet the tests may be worthless. A single test which simply executes every line in the production code without asserting on anything will yield a 100% coverage, but it won't detect any defects. Clearly, coverage can't be used to measure test quality. So what is it good for?

Coverage is an alarm bell

Although a 100% coverage does not provide much information, the opposite case does. If you have a test coverage of 20%, it means that almost none of the production code is tested. Sound the alarm and do something!

Hence, code coverage should be used as an indication that something is wrong and not as a measure of test quality. 

So which coverage should we aim for?

Personally, I rarely measure code coverage. If you do proper TDD (write test first and never touch production code before you have a failing test), your coverage will naturally be close to 100% because you already test for all the desired behavior. Some test runners decorate the source code as you type to indicate whether it's covered by tests, so untested code will be screaming at you.

If a team wants to run periodical code coverage analysis in order to enable the lack-of-tests alarm bell, I would aim for close to 100% on testable code and 0% on untestable code. Testable and untestable code should ideally be separated into different modules of the application, but that's another story which deserves a separate blog post at some point in the future...

Monday, May 19, 2014

Test-driven development of plug-ins

How can we do test-driven development in a plug-in environment where our production code lives inside of an application? This may be the case if we are developing plug-ins for applications like Microsoft Office, Adobe Photoshop or Schlumberger Petrel. There are two challenges with this:
  1. NUnit (or any other test runner of choice) may not be able to start the host application in order to execute the plug-ins
  2. Starting the host application may take a while. If it takes 30 seconds to start the application, it's hard to get into the efficient TDD cycle that I describe in the blog post "An efficient TDD workflow".

Abstraction and isolation

Abstraction, inversion of control and isolation are common strategies when we develop code which is dependent on the environment. The idea is to create abstractions of the environment so that it can be omitted when executing the tests. It's not always possible, though. Sometimes our plug-in interacts heavily with, and is dependent upon, the behavior of the environment.
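When the host API can be abstracted away, the abstraction can be as simple as a thin wrapper interface that the plug-in code talks to. Here is a minimal sketch; the interface and the fake are made up for illustration, and the production implementation would simply delegate to the real host API:

using System.Collections.Generic;

// Hypothetical abstraction of the small part of the host application that our plug-in needs.
public interface IHostDocument
{
  string Title { get; }
  void AddShape(string shapeName);
}

// Test double used by the unit tests -- no host application required.
public class FakeHostDocument : IHostDocument
{
  private readonly List<string> _addedShapes = new List<string>();

  public string Title { get; set; }

  public IList<string> AddedShapes
  {
    get { return _addedShapes; }
  }

  public void AddShape(string shapeName)
  {
    _addedShapes.Add(shapeName);
  }
}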

So what do we do when that isolation is not possible? If we can't isolate them, join them!

Unit test runner as a plug-in

Both challenges above can be solved by creating your own test runner as a plug-in inside the host application! Instead of letting NUnit start the host application (which is slow and perhaps not even possible), the host application runs the tests itself.

So instead of doing a slow TDD cycle like this:


We want to move the host application startup out of the cycle like this:


So how can we do this? Create a test runner inside the host application... as a plug-in!

Let's have a look at one specific case: a test runner as a plug-in in Petrel. Petrel faces challenge #2 mentioned above. It can run in a "unit testing mode" where NUnit tests can start Petrel and run the Petrel-dependent production code, but startup takes 30-60 seconds.

If you are using NUnit, this is quite simple. NUnit provides multiple layers of test runners, depending on how much you want to customize its behavior. A very rudimentary implementation which allows the user to select a test assembly, list tests and execute them could look like this:

public partial class TestRunnerControl : UserControl
  {
    private readonly Dictionary<string, List<string>> _assembliesAndTests = new Dictionary<string, List<string>>();

    private string _pluginFolder;

    public TestRunnerControl()
    {
      InitializeComponent();

      this.runButton.Image = PlayerImages.PlayForward;

      FindAndListTestAssemblies();
    }

    private void FindAndListTestAssemblies()
    {
      var pluginPath = Assembly.GetExecutingAssembly().Location;
      _pluginFolder = Path.GetDirectoryName(pluginPath);

      AppDomain tempDomain = AppDomain.CreateDomain("tmpDomain", null, new AppDomainSetup { ApplicationBase = _pluginFolder });
      tempDomain.DoCallBack(LoadAssemblies);
      AppDomain.Unload(tempDomain);

      foreach (var testDll in _assembliesAndTests.Keys)
      {
        this.testAssemblyComboBox.Items.Add(testDll);
      }
    }

    private void LoadAssemblies()
    {
      foreach (
        var dllPath
          in Directory.GetFiles(_pluginFolder)
            .Where(f => f.EndsWith(".dll", true, CultureInfo.InvariantCulture) && f.Contains("PetrelTest")))
      {
        try
        {
          Assembly assembly = Assembly.LoadFrom(dllPath);

          var dllFilename = Path.GetFileName(dllPath);

          try
          {
            var typesInAssembly = assembly.GetTypes();

            foreach (var type in typesInAssembly)
            {
              var attributes = type.GetCustomAttributes(true);

              if (attributes.Any(a => a is TestFixtureAttribute))
              {
                if (!_assembliesAndTests.ContainsKey(dllFilename))
                {
                  _assembliesAndTests[dllFilename] = new List<string>();
                }

                _assembliesAndTests[dllFilename].Add(type.FullName);
              }
            }

            Ms.MessageLog("*** Found types in " + assembly.FullName);
          }
          catch (Exception e)
          {
            Ms.MessageLog("--- Could not find types in " + assembly.FullName);
          }
        }
        catch (Exception e)
        {
          Ms.MessageLog("--Could not load  " + dllPath);
        }
      }
    }  

    private void TestAssemblySelected(object sender, EventArgs e)
    {
      this.testClassComboBox.Items.Clear();

      var testAssembly = this.testAssemblyComboBox.SelectedItem as string;
      if (!string.IsNullOrEmpty(testAssembly))
      {
        foreach (var testClass in _assembliesAndTests[testAssembly])
        {
          this.testClassComboBox.Items.Add(testClass);
        }
      }
    }

    private void runButton_Click(object sender, EventArgs e)
    {     
      if (!CoreExtensions.Host.Initialized)
      {
        CoreExtensions.Host.InitializeService();
      }

      var results = RunTests();

      ReportResults(results);
    }

    private void ReportResults(TestResult results)
    {
      var resultsToBeUnrolled = new List<TestResult>();
      resultsToBeUnrolled.Add(results);

      var resultList = new List<TestResult>();
      while (resultsToBeUnrolled.Any())
      {
        var unrollableResult = resultsToBeUnrolled.First();
        resultsToBeUnrolled.Remove(unrollableResult);

        if (unrollableResult.Results == null)
        {
          resultList.Add(unrollableResult);
        }
        else
        {
          foreach (TestResult childResult in unrollableResult.Results)
          {
            resultsToBeUnrolled.Add(childResult);
          }
        }
      }

      int successCount = resultList.Count(r => r.IsSuccess);
      int failureCount = resultList.Count(r => r.IsFailure);
      int errorCount = resultList.Count(r => r.IsError);

      string successString = string.Format("{0} tests passed. ", successCount);
      string failureString = string.Format("{0} Tests failed. ", failureCount);
      string errorString = string.Format("{0} Tests had error(s). ", errorCount);

      string summary = successString + failureString + errorString;

      this.resultSummaryTextBox.Text = summary;

      this.resultSummaryTextBox.Select(0, summary.Length);
      this.resultSummaryTextBox.SelectionColor = Color.FromArgb(80, 80, 80);

      if (successCount > 0)
      {
        this.resultSummaryTextBox.Select(0, successString.Length);
        this.resultSummaryTextBox.SelectionColor = Color.DarkGreen;
      }

      if (failureCount > 0)
      {
        this.resultSummaryTextBox.Select(successString.Length, failureString.Length);
        this.resultSummaryTextBox.SelectionColor = Color.Red;
      }

      if (errorCount > 0)
      {
        this.resultSummaryTextBox.Select(successString.Length + failureString.Length, errorString.Length);
        this.resultSummaryTextBox.SelectionColor = Color.Red;
      }

      this.resultSummaryTextBox.Select(0, summary.Length);
      this.resultSummaryTextBox.SelectionAlignment = HorizontalAlignment.Center;

      this.resultSummaryTextBox.Select(0, 0);

      // Set grid results
      this.resultsGridView.Rows.Clear();

      int firstErrorIdx = -1;

      foreach (var result in resultList)
      {
        var testName = result.Name;
        var image = result.IsSuccess ? GeneralActionImages.Ok : StatusImages.Error;

        int idx = this.resultsGridView.Rows.Add(testName, image);

        if (firstErrorIdx == -1 && !result.IsSuccess)
        {
          firstErrorIdx = idx;
        }
      }

      if (firstErrorIdx != -1)
      {
        this.resultsGridView.FirstDisplayedScrollingRowIndex = firstErrorIdx;
      }
    }

    private TestResult RunTests()
    {
      var testAssembly = this.testAssemblyComboBox.SelectedItem as string;
      var testClass = this.testClassComboBox.SelectedItem as string;

      TestPackage testPackage = new TestPackage(Path.Combine(_pluginFolder, testAssembly));
      TestExecutionContext.CurrentContext.TestPackage = testPackage;

      TestSuiteBuilder builder = new TestSuiteBuilder();
      TestSuite suite = builder.Build(testPackage);

      var testFixtures = FindTestFixtures(suite);
      var desiredTest = testFixtures.First(f => f.TestName.FullName == testClass);
      var testFilter = new NameFilter(desiredTest.TestName);
      TestResult result = suite.Run(new NullListener(), testFilter);

      return result;
    }

    private IEnumerable<TestFixture> FindTestFixtures(Test test)
    {
      var testFixtures = new List<TestFixture>();

      foreach (Test child in test.Tests)
      {
        if (child is TestFixture)
        {
          testFixtures.Add(child as TestFixture);
        }
        else
        {
          testFixtures.AddRange(FindTestFixtures(child));
        }
      }

      return testFixtures;
    }
  }

Note that this code sample is not complete, but it shows how to find and run the tests. Use the test runner control in a plug-in inside the host application, and it will look like this:


Whenever a test is failing, Visual Studio will break at the offending NUnit Assert statement:


So how does this allow for an efficient code-test cycle? By using Visual Studio's Edit & Continue! Whenever you want to edit the production code or the test code, press pause in Visual Studio, edit as needed, and run the tests again. Hence, you can write code and (re)run tests at will without restarting the host application.

It's not as efficient as writing proper unit tests that execute in milliseconds, but it's far better than waiting for the host application on every test execution.

Wednesday, May 14, 2014

An efficient TDD workflow

How do you do your everyday TDD work? There are many ways to skin a cat, but I like to do it in a cyclic fashion:


I start by writing a very simple failing test. I then implement enough of the production code to just make this test pass. I then add more details to the test which make the test fail again. I implement more of the production code so that it passes again. Rinse and repeat until your tests are covering all the use cases.

Let's say that I want to write a Matrix class with an inversion method. I'd start with a super simple test case where I assert that the inverse of the identity matrix I is an identity matrix. I'd then add more and more cases:

  1. Create a test which asserts that the inverse of the identity matrix I is an identity matrix. The production code returns an empty matrix, so the test will fail
  2. Fix the production code so that it always returns I. The test will pass.
  3. Add new test case which asserts that the inverse of a different 2x2 matrix is correct. The test will fail
  4. Fix the production code so that it calculates the inverse of any 2x2 matrix. The test will pass
  5. Add new test case with a 3x3 matrix. The test will fail
  6. Fix the production code so that it calculates the inverse of any size matrix. The test will pass.
  7. Add a new test case which asserts that calculating the inverse of a singular matrix throws an ArithmeticException. The test will fail because it tries to divide by zero
  8. Fix the production code so that it handles singular matrices correctly. The test will pass.
...and the show goes on until I am satisfied. Note that I'm not writing the entire test and then the entire production code method. Both the test and the production code evolve in parallel -- but I always have a failing test before doing anything with the production code.
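As a concrete illustration of step 1, the very first test can be almost embarrassingly simple. The Matrix class here is hypothetical, assumed to have an Identity factory method, an Invert method and value-based equality:

[Test]
public void Invert_CalledOnIdentityMatrix_ReturnsIdentityMatrix()
{
  // Arrange
  var identity = Matrix.Identity(2);

  // Act
  var inverse = identity.Invert();

  // Assert
  Assert.AreEqual(identity, inverse);
}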

Furthermore, I like to have the production code and the test code side-by-side like this so that I don't need to switch back and forth between the files:


If you are using a good test runner like NCrunch, you don't even need to run the tests manually. They are run automatically as you type, and you will have instant feedback when the test fails or passes!


Thursday, April 24, 2014

Sounds great, but what is the cost?

My latest posts have been very developer-centric. This post should address a broader audience, including those with suits and ties. Yes, managers and CEOs, I am talking to you! ;-)

Test-driven development is not only about writing code, however. Software development is about delivering value to a customer and earning money. Profit is income minus costs. One of the most common arguments against TDD is "it takes too much time" or "it is too expensive". So what is the cost of doing test-driven development?

It's not easy to put a price tag on test-driven development. We can get an idea about the alternative (not doing TDD) by looking at the Total Cost to Market, however. This is the total cost of a change in an application from idea to customer deployment. The total cost includes

  • Developing the feature
  • Testing
  • Bug fixing
  • Cost of risk that there are defects
  • Customer deployment
The cost incurred by actually typing the code is just a fraction of the Total Cost to Market.

Total cost to market

How much does it cost to develop a given feature?


In the beginning of a development phase, this is a fairly simple calculation: it's the number of hours spent on coding multiplied by the developers' hourly rate.

As time progresses and the product grows, this price increases because we add the risk of introducing bugs that are not discovered. A bug which is introduced late in the development cycle is more costly because there is an added risk that this bug (or a new bug that is introduced by the bug fix) is never discovered.

It becomes even worse after feature freeze when beta testing has started. If a bug is discovered after 90% of the testing has finished, how do we know that the bug fix did not introduce a new bug affecting the previously tested features? Do we test everything over again? Now that's an additional cost that contributes to the exponential cost graph!

And now the nightmare begins: what if the bug is discovered by the customer instead of our testers? That adds the additional cost of lost customer satisfaction and re-deployment. We can't easily put a number on this which can be put into a budget, but we certainly want to avoid that!

Fail early

The moral is: we want to fail early. We want to discover bugs as early as possible in the development cycle. The developer should get an alarm bell as soon as he creates the defect, not when a beta tester (or even worse, the customer) discovers it.

Bugs will occur, so we want to find and fix them while the cost is low. In other words, we want the bug to contribute as little as possible to the Total Cost to Market.

Good unit tests are a good first line of defense against bugs. They are certainly not a guarantee that you will deliver bug-free software, but they are a very efficient tool for delivering software with fewer bugs. Little research has been done to actually quantify the economic benefit of TDD, but this study of a few projects at IBM and Microsoft indicates that TDD roughly gave a 25% increase in development time and reduced the number of bugs to a third: http://research.microsoft.com/en-us/groups/ese/fp17288-bhat.pdf

Conclusion

Did I answer my own question about the cost of doing TDD? No, I can't provide you with the number of dollars to put into your budget spreadsheet. However, it's pretty obvious that we can reduce the Total Cost to Market by discovering and fixing bugs as early as possible. That is a task which is far too important to rely only on manual testing.

Monday, March 24, 2014

NUnit TestCase and TestCaseSource

How do you test different input values for the method under test? Let's say that you want to test a class that converts sentences into single camel-case words. As an example, "what does the fox say" should be converted to "WhatDoesTheFoxSay".

There are many scenarios that need to be tested. There can be one or more words in the sentence. We should also test for an empty string. This can lead to many almost identical tests:


[Test]
public void MakeCamelCase_CalledWithEmptyString_ReturnsEmptyString()
{
  // Arrange
  var input = string.Empty;

  // Act
  var result = StringTools.MakeCamelCase(input);

  // Assert
  Assert.AreEqual(string.Empty, result);
}

[Test]
public void MakeCamelCase_CalledWithOneWord_ReturnsThatWord()
{
  // Arrange
  var input = "ab";

  // Act
  var result = StringTools.MakeCamelCase(input);

  // Assert
  Assert.AreEqual("Ab", result);
}

[Test]
public void MakeCamelCase_CalledWithTwoWords_ReturnsCamelCase()
{
  // Arrange
  var input = "ab cd";

  // Act
  var result = StringTools.MakeCamelCase(input);

  // Assert
  Assert.AreEqual("AbCd", result);
}


...and the list goes on. So how can we avoid repeating those almost identical tests?

TestCase

In NUnit, the test attribute TestCase comes to the rescue! Simply use one single test and provide multiple test cases as inputs:


[TestCase("", "")]
[TestCase("ab", "Ab")]
[TestCase("ab cd", "AbCd")]
[TestCase("ab cd ef", "AbCdEf")]
public void MakeCamelCase_CalledWithString_ReturnsCamelCase(string input, string expectedResult)
{
  // Act
  var result = StringTools.MakeCamelCase(input);

  // Assert
  Assert.AreEqual(expectedResult, result);
}


Now these three tests plus an additional test case with three words are collapsed to one single easy-to-read test.

Note that the [TestCase] attribute can take any number of parameters, including input and expected output. The test itself takes those parameters as input.

With NUnit 2.5 or newer, the input and result parameters can be made a bit more readable by using the named attribute parameter Result:


[TestCase("", Result = "")]
[TestCase("ab", Result = "Ab")]
[TestCase("ab cd", Result = "AbCd")]
[TestCase("ab cd ef", Result = "AbCdEf")]
public string MakeCamelCase_CalledWithString_ReturnsCamelCase(string input)
{
  // Act
  var result = StringTools.MakeCamelCase(input);

  return result;
}

TestCaseSource

All this is great, but [TestCase] has a limitation: it can only take constant expressions as parameters. If we were to test a mathematical algorithm on an input class like a Vector, we couldn't have used [TestCase]. There is another option, though: the TestCaseSource attribute:


[Test, TestCaseSource("VectorDotProductCases")]
public void DotProduct_CalledWithAnotherVector_ReturnsDotProduct(Vector lhs, Vector rhs, double expectedResult)
{
  // Act
  var dotProduct = lhs.DotProduct(rhs);

  // Assert
  Assert.AreEqual(expectedResult, dotProduct);
}

private static readonly object[] VectorDotProductCases =
{
  new object[] { new Vector(1,2,3), new Vector(0,0,0), 0 },
  new object[] { new Vector(1,2,3), new Vector(4,5,6), 32 },  
};

This is slightly less readable than using TestCase, but still it's better than replicating the tests for each set of input data.


Friday, February 28, 2014

Two readability tips - naming and organizing tests

I have mentioned earlier that readability is one of the pillars of good unit tests. Making readable unit tests is not trivial. It takes practice, but there are some good practices that can be applied to get a good start.

As an example, let's consider a test which verifies that a vector dot product is calculated correctly.

Naming

How do you name your unit tests? You should be able to get an idea of what a test verifies by just reading the name. Test names like this tell nothing about the tests:

[Test]
public void VectorDotProductTest1()
{
   ...
}

[Test]
public void VectorDotProductTest2()
{
   ...
}

This is more descriptive, but reading the names is still a bit awkward:

[Test]
public void TestThatTheVectorDotProductIsZeroWhenOneOfTheVectorsIsZero()
{
   ...
}

[Test]
public void TestThatTheVectorDotProductIsDoneCorrectlyWhenVectorsAreNonZero()
{
   ...
}

Having a standard naming pattern like this makes it easier:

ItemUnderTest_Scenario_ExpectedBehaviour

As you can see, the name is divided into three parts, separated by underscores. The three parts are
  • Item under test: The item, usually a property, method or constructor, which is being tested
  • Scenario: The scenario, e.g. input data, parameters or other prerequisites
  • Expected behaviour: The expected result or outcome of the test

The previous examples would then be:

[Test]
public void DotProduct_OneVectorIsZero_ReturnsZero()
{
   ...
}

[Test]
public void DotProduct_VectorsAreNonZero_ReturnsCorrectProduct()
{
   ...
}



As you can see, dividing the names into sections following a defined pattern makes the names much easier to read.

Organizing the tests

How do you organize your tests? It should be perfectly clear to the user which part of the test is doing setup and preparation (arrangement), which part does the action which is being tested (act) and which part is doing the assertion (assert).

A common and recommended way to achieve this is to follow the "Arrange, Act, Assert" (AAA) pattern. The test is strictly divided into Arrange, Act and Assert sections like this:

[Test]
public void DotProduct_VectorsAreNonZero_ReturnsCorrectProduct()
{
  // Arrange
  var lhsVector = new Vector(1, 2, 3);
  var rhsVector = new Vector(4, 5, 6);

  // Act
  double result = lhsVector.DotProduct(rhsVector);

  // Assert
  Assert.AreEqual(32, result);
}


In this test, the AAA pattern makes it perfectly clear and obvious to the reader which part is doing what. Moreover, this pattern makes it easier to maintain the test. It's much harder to maintain a test where the asserts are spread all over the place.

If you find that it's tempting to add asserts in between the Arrange and Act sections, it's a sign that it's worth considering refactoring the test and/or the production code... or just take a deep breath and have a coffee.


Monday, February 24, 2014

"We are too busy"

This seems to be the most common argument against test-driven development... ;-)



Wednesday, February 5, 2014

TDD and 3D visualization

So far I have been posting about fairly trivial stuff -- fundamentals and best practices of TDD. Let's step out of the comfort zone and talk about something less comfortable: testing of 3D graphics applications.

This is an area which is not covered much (or at all) by text books or articles. Why? I think it's mainly because it's harder to test 3D graphics than to test numeric algorithms or database manipulation. Another reason is that the web application community seems to have been better at picking up TDD than the scientific or game development community.

It's not impossible, though. So where do we start? Let's have a look at a scenario where we want to develop an application with a 3D scatter plot of a 4-dimensional dataset. The plot has the following requirements:
  • All data samples shall be represented as spheres in 3D space
  • The first 3 dimensions shall be defined by the spatial X/Y/Z position in the plot
  • The 4th dimension shall be indicated with a color
  • In order to be able to focus on a specific area and minimize the cluttering of the display, the user shall be able to interactively move a box-shaped probe in the plot and make the points outside of this box smaller.
These are typical requirements for a scientific application, but the principles for testing it can be applied to geological 3D models, medical data, games or other kinds of 3D graphics.

The plot should look like this (left). The user is focusing on a smaller area (right).

Know what you are testing

Testing 3D graphics can seem a bit daunting. How do you verify that the graphics card is producing the correct pixels animated on the screen? The short answer is: usually you shouldn't.

Keep in mind the pillars of good unit tests: test the right thing. Also keep in mind the Single Responsibility Principle. What is the visualization code under test doing? Is it actually producing pixels, or is it using a 3D rendering toolkit to do the visualization?

High-level visualization

3D visualization software often uses a high-level 3D toolkit like Open Inventor, VTK or HueSpace to do the 3D rendering. In this case, you should trust that the 3D toolkit renders correctly whatever you instruct it to render. Your code is creating a scene graph or a visual decision tree, and the 3D toolkit is doing the rendering based on this.

Let's say that we have a dataset and data sample class that looks like this:

public struct DataSample
{
  public double ValueDim1;
  public double ValueDim2;
  public double ValueDim3;
  public double ValueDim4;
}

public class Dataset
{
  public event ChangedEventHandler DataSamplesChanged;

  public IEnumerable<DataSample> DataSamples { get; set; }

  // ...and the rest of the implementation here
}

The plot view is a class which takes a dataset as constructor parameter and produces an Open Inventor scenegraph. A naive Open Inventor implementation might look like this:

public class ScatterPlotView
{
  private Dataset _dataset;
  public SoSeparator OivNode { get; private set; }

  public ScatterPlotView(Dataset dataset)
  {
    _dataset = dataset;
    this.OivNode = new SoSeparator();

    UpdateNode();
  }

  private void UpdateNode()
  {
    this.OivNode.RemoveAllChildren();

    foreach (var dataSample in _dataset.DataSamples)
    {
      var sampleRoot = new SoSeparator();
      var color = new SoMaterial();
      sampleRoot.AddChild(color);
      color.diffuseColor.SetValue(GetColorByValue(dataSample.ValueDim4));

      var translation = new SoTranslation();
      sampleRoot.AddChild(translation);
      translation.translation.Value = new SbVec3f((float)dataSample.ValueDim1, (float)dataSample.ValueDim2, (float)dataSample.ValueDim3);

      var sphere = new SoSphere();
      sphere.radius.Value = GetRadiusBasedOnWhetherSampleIsInsideProbe(...);
      sampleRoot.AddChild(sphere);

      this.OivNode.AddChild(sampleRoot);
    }
  }

  private SbVec3f GetColorByValue(double valueDim4)
  {
    // Look up color in a color table
  }
}

There are many scenarios that we might want to test here, but let's have a look at one specific scenario: the dataset changes, and the scene graph should change accordingly.

[Test]
public void OivNode_DatasetChanges_SceneGraphIsUpdated()
{
  // Arrange
  var dataset = new Dataset();
  dataset.DataSamples = new[]
  {
    new DataSample(),
    new DataSample()
  }; // Initial samples

  var scatterPlotView = new ScatterPlotView(dataset);

  // Act
  dataset.DataSamples = new[]
  {
    new DataSample(),
    new DataSample(),
    new DataSample()
  }; // Set 3 other samples

  // Assert
  Assert.AreEqual(3, scatterPlotView.OivNode.GetNumChildren());
}  

Here, we create a plot view with a dataset that has two samples. The dataset is then modified to have three samples, and the test verifies that the scenegraph changes accordingly. Note that we don't inspect the scenegraph in detail here. This test verifies that the scene graph is modified when the dataset changes and should assert on only that.

Other tests might traverse the scene graph and verify the position and color of each of the spheres in the scene graph. Just make sure that you don't overspecify the test. Don't write a test that will fail if the color scale changes slightly! In general you should test that the scene graph behaves correctly, rather than re-creating the logic in the scene graph construction.

User interaction testing

So far we have tested that the plot reacts to changes in the data. How about user interaction testing? This is actually similar to the previous test: make an action and assess the scene graph. The difference is how the action is performed: we need to mimic user interaction.

Again - know what you are testing! If you are using Open Inventor draggers, you don't need to emulate the mouse. That is not your responsibility -- it's Open Inventor's responsibility to transform mouse movements into dragger movements!

Let's write a test that verifies that the probe behaves correctly. The probe is represented with a SoTabBoxDragger which is added to the scene graph by the ScatterPlotView class. Here is one example of a test:

[Test]
public void OivNode_ProbeIsDragged_DataPointsOutsideBoxAreSmaller()
{
  // Arrange
  var dataset = new Dataset();
  dataset.DataSamples = new[]
  {
    new DataSample { ValueDim1 = 0.1 },
    new DataSample { ValueDim2 = 0.9 }
  };

  var scatterPlotView = new ScatterPlotView(dataset);

  // Act
  scatterPlotView.BoxDragger.translation.Value = new SbVec3f(0.5f, 0, 0);

  // Assert
  var sphereForSample1 = ...traverse the scene graph to find first sphere
  var sphereForSample2 = ...traverse the scene graph to find second sphere
  Assert.AreEqual(0.1, sphereForSample1.radius.Value);
  Assert.AreEqual(1.0, sphereForSample2.radius.Value);
}

Here we insert two points into the dataset. The dragger is moved so that it only contains one of the points, and the test inspects the scene graph to verify that the sphere sizes are correct.

If you write your own draggers instead of using Open Inventor's built-in draggers, you may need to emulate actual mouse coordinates. Still, you should make abstractions so that you can pass synthetic mouse events to the nodes rather than emulating actual mouse events.

Scene graph inspection

How do we verify that the scene graph is correct? One might create a very rigid test that verifies every node and node connection. That quickly leads to an overspecified test which reproduces the logic in the production code, however.

For Open Inventor, it's practical to use a SoSearchAction and inspect the resulting path(s). VTK has a similar mechanism for inspecting the pipeline. Just make sure that you don't copy the logic of the production code.

The scene graph related to the data samples in our plotter can be inspected like this:

var searchAction = new SoSearchAction();
searchAction.SetInterest(SoSearchAction.Interests.ALL);
searchAction.SetType(typeof(SoSphere));
searchAction.SetSearchingAll(true);
searchAction.Apply(scatterPlotView.OivNode);            

// Get all the paths that lead to a SoSphere:
var pathList = searchAction.GetPaths();

int pathCount = pathList.Count;

// We can use the path count to assert against the number of samples

foreach (SoPath path in pathList)
{
  var sphereRoot = path.GetNode(path.Length-2) as SoSeparator;
  bool hasTranslation = false;
  for (int i = 0; i < sphereRoot.GetNumChildren(); ++i)
  {
    if (sphereRoot.GetChild(i) is SoTranslation)
    {
      // We can assert that the sphere is positioned with
      // a translation. We can also check the actual position
      hasTranslation = true;
    }
  }       
}

In this example, we inspect the relevant part of the scene graph and verify that
  • There are N paths leading to data point spheres, which should equal to N samples
  • The sphere positions are determined by a SoTranslation. We could also check the actual position of the SoTranslation
This could be extended to verify that the correct material is assigned to the relevant spheres. The possibilities are endless, but consider splitting up the test so that each test is asserting on only one responsibility.

Inspection code like this makes a test hard to read, and should be put into helper methods. A test call would look like this:

var pathsLeadingToSphere = OivTestHelper.GetPaths(plotView.OivNode, typeof(SoSphere));

Assert.AreEqual(3, pathsLeadingToSphere.Count);

Assert.IsTrue(OivTestHelper.PathContainsNode(pathsLeadingToSphere[0], typeof(SoTranslation)));

Of course, you will usually not traverse the entire scene graph of a view at once. In complex views, you should divide the scene graph into smaller parts that can be tested individually.

Low-level visualization

For low-level visualization like ray tracing or fragment shaders, where your code is actually responsible for doing the rendering, a test-first approach is a bit harder. It's difficult to write a test that specifies what the result should be, because you are producing a bitmap rather than a state.

The first step would be to ensure that you have a good separation between the low-level rendering code and the higher-level tier that is setting up states and handling user interaction, as the latter can be unit tested more easily.

I haven't found any practical approach to do proper TDD of low level visualization, but you can write regression tests that verify that the results are still correct after optimizations, generalizations and bug fixes. Given that the rendering code is able to render to a bitmap, you can do bitmap comparisons and compare future test sessions to a set of reference images.

Summary

Like I mentioned at the beginning of this post, unit testing 3D graphics is not trivial. The most important point is that you should test the state or the scene graph of the rendering engine and trust that the rendering engine does its job translating this into pixels. If you don't trust it, perhaps it's a good idea to use something else...

One might argue that this scene graph traversal and inspection introduces logic and complexity to the tests. This somewhat contradicts what I have written in Pillars of Unit Tests. I try to mitigate this by creating helper methods for scene graph traversal. There is some logic involved, but this logic is hidden and doesn't obscure the readability of the tests. When you think about it, this is conceptually the same as calling NUnit's equality comparison for arrays or other non-trivial equality checks.

Monday, January 27, 2014

NCrunch - my favourite test runner

What is your favourite test runner? I have been using many different test runners over the years. Once in a while I find a better one and switch to that. Like I mentioned in my previous blog post Pillars of Unit Tests, one of the most important things when establishing a test environment is the ease of use. Running the tests should not be a hurdle.

NCrunch

I stumbled upon NCrunch some time ago, a .Net test runner for Visual Studio. It represents a whole new paradigm when it comes to running tests. Actually, the developer does not need to run the tests anymore -- NCrunch does it for you while you type. You don't even need to save the file!

While you edit the code, NCrunch uses coloured dots to the left of the code to indicate the test status of that particular line.

Hopefully your code usually looks like this, with happy green dots covering everything. That means your code is covered by passing tests. Production code is to the left and test code to the right:

All systems operational
Let's try to introduce a bug... just for the fun of it. Red dots appear at all lines in the production code that are covered by failing tests. Note the red x on one of the lines in the test. This is the offending line that makes the test fail.



What if we comment out some tests so that parts of the production code are no longer covered by tests? NCrunch marks the non-covered lines with black dots... and all bets are off!




Pretty impressive, and quite useful. You'll find it at http://www.ncrunch.net/.

By the way, I am not affiliated with the authors of NCrunch in any way. I'm just a happy user.

Saturday, January 25, 2014

Pillars of unit tests

What is a good test? It may seem like a trivial question; a test should flash red if something is wrong and be green otherwise. But how do we achieve that? There are some pillars, or golden rules, that should always be honoured:

Trustworthiness

Simple. If you don't trust the tests, they are not worth anything. You should be able to trust that tests 
    A) fail when they should
    B) don't fail when they shouldn't.

If a test all of a sudden turns red and the response is "well, that test fails once in a while. It's normal", the test is not trustworthy.

What can we do to make tests more trustworthy?
  • Don't rely on things that can change in each test execution. Don't use timers or random numbers in tests. If this randomness causes failing tests from time to time, developers stop trusting them. A test that fails from time to time should be seen as an indication of an intermittent problem in the system under test, not a problem in the test.
  • Don't make assumptions on or introduce dependencies to the environment. If the tests depend on the file system, the graphics card or the phase of the moon, the tests will become fragile and fail for the wrong reason.
  • Don't overspecify the test. Have a clear vision of what the test is verifying. Don't add a bunch of asserts unless you have a very specific reason for adding them. Usually, each test should contain only one assert. Tests that are overspecified will often fail for reasons that are not related to the test itself. What happens then? Developers will stop trusting the tests!
Consider this test, which verifies that the tool class StringTools is creating the reverse of a string: 

    [Test]
    public void ReverseString_CalledWithString_ReturnsStringInReverse()
    {
      // Arrange
      string input = "abc";

      // Act
      string output = StringTools.ReverseString(input);

      // Assert
      Assert.AreEqual("cba", output);
      Assert.AreEqual(3, output.Length); // This is overspecification
    }

After verifying that the reversed string is returned, the length is also asserted upon. That's overspecification! The first assert is already doing a perfectly valid and sufficient verification. The second assert only makes the test more fragile and less maintainable because we can't change the length of the test string without also changing that assert. It adds complexity without yielding any value.

Maintainability

One of the most common pitfalls when doing TDD is test maintenance. As a system grows, it's easy to forget about the tests. As specifications change, it becomes hard to adapt the tests to the new requirements. Treat your test code with the same care as your production code!

Whenever you have finished a test, consider refactoring it. Also consider whether the class under test is becoming hard to test; perhaps it's time to refactor both the test class and the testee class.

If each of the tests for a specific test class is doing multiple lines of setup, it might be a good idea to refactor this into helper methods.

As an example, consider these tests, where several methods in the "TesteeClass" class are being tested:

    [Test]
    public void SomeMethod_CalledWithNull_ReturnsFalse()
    {
      // Arrange
      var testeeObject = new TesteeClass();
      testeeObject.Initialize();
      testeeObject.SomeProperty = new SomeOtherClass();

      // Act
      bool result = testeeObject.SomeMethod();

      // Assert
      Assert.IsFalse(result);
    }

    [Test]
    public void SomeOtherMethod_Called_ReturnsTrue()
    {
      // Arrange
      var testeeObject = new TesteeClass();
      testeeObject.Initialize();
      testeeObject.SomeProperty = new SomeOtherClass();

      // Act
      bool result = testeeObject.SomeOtherMethod();

      // Assert
      Assert.IsTrue(result);
    }

You can enhance readability and maintainability with some refactoring:

    [Test]
    public void SomeMethod_CalledWithNull_ReturnsFalse()
    {
      // Arrange
      var testeeObject = this.MakeTesteeObject();

      // Act
      bool result = testeeObject.SomeMethod();

      // Assert
      Assert.IsFalse(result);
    }

    [Test]
    public void SomeOtherMethod_Called_ReturnsTrue()
    {
      // Arrange
      var testeeObject = this.MakeTesteeObject();

      // Act
      bool result = testeeObject.SomeOtherMethod();

      // Assert
      Assert.IsTrue(result);
    }

    private TesteeClass MakeTesteeObject()
    {
      var testeeObject = new TesteeClass();
      testeeObject.Initialize();
      testeeObject.SomeProperty = new SomeOtherClass();

      return testeeObject;
    }

Readability

Your tests need to be readable. They are your code-level functional requirement document. If a test is hard to read, it's hard to maintain. It is also hard to figure out why it fails.

Pretend that an axe murderer is going to read your tests. He knows where you live, and he gets really mad if he can't understand your tests. You don't want to make him mad.

  • Avoid using loops and logic. It should be obvious to the reader what the test does. If a test contains loops, ifs, logic and other constructs, the reader needs to think in order to understand the test.
Production code vs test code.
  • Use the smallest possible dataset. If you develop an algorithm, use the smallest possible dataset needed to verify the algorithm. This will make it easier to read the test, and execution will be faster.
  • Avoid using magic numbers. If a test contains cryptic numbers, the axe murderer will wonder whether the number has a meaning. Use the lowest possible number so that it's obvious that the number is just an arbitrary input number.
Consider these two tests, one using magic numbers and one using the lowest possible number:

    [Test]
    public void AddNumbers_CalledWithTwoNumbers_ReturnsSum()
    {
      // Arrange
      double number1 = 54254; // Does this number have a meaning??
      double number2 = 64333;

      // Act
      double sum = Calculator.AddNumbers(number1, number2);

      // Assert
      Assert.AreEqual(118587, sum);
    }

    [Test]
    public void AddNumbers_CalledWithTwoNumbers_ReturnsSum()
    {
      // Arrange
      double number1 = 1; // It's obvious -- it's just an arbitrary number
      double number2 = 2;

      // Act
      double sum = Calculator.AddNumbers(number1, number2);

      // Assert
      Assert.AreEqual(3, sum); // It's easier to understand the expected result
    }

Ease of use

Although this item does not pertain to the tests themselves, it is equally important. It should be easy to run tests, and they should run fast. Once it becomes a hurdle to run the tests, developers will stop running them. It also becomes hard to get into the smooth test-driven flow where you develop test and production code in parallel.

Make sure that your tests run fast, and choose a test runner that allows you to run tests easily. Visual Studio has a decent test runner if you code .Net. If you use ReSharper, you have an even better test runner.


Happy TDD'ing!

Thursday, January 23, 2014

Why do TDD?

So what is test-driven development all about? Why do TDD?

The short version is: TDD is the practice of writing tests before, or in parallel with, the production code. The developer will know immediately whether the code is working as expected without starting the application. Moreover, since the tests run automatically, developers will be alerted immediately if they make changes to the code that introduce defects.

Numerous articles are written about the benefits of TDD. To summarize, TDD gives a higher code quality if it's done right.

Benefits of TDD from a technical point of view

So what's in it for us developers? Let's have a look on some of the benefits of doing TDD.

Bugs are caught earlier

Has it ever happened to you that you have developed something, and someone else (or you) introduces bugs a year later? Perhaps some new functionality was added, or some optimizations were done, and all of a sudden your favourite algorithm failed?

Was it hard to figure out when this bug was introduced and hence hard to fix it? Perhaps the bug even found its way to the customer?

Well, you are not alone! That happens to us all. The good news is: the probability of discovering the bug immediately is much higher if the existing code is covered by tests!

You'd much rather let a test find the defect... ...than have an angry customer find it

Refactor with confidence

Refactor often, they say. Frequent refactoring yields a continuous improvement of the architecture as the codebase grows. So how do you know that the refactoring does not introduce defects? By having automated tests, of course! If you can refactor with a lower likelihood of creating defects, you can refactor more often.

Better architecture

Now that's a bold claim. How can tests enhance the architecture? Because doing TDD with a poor architecture is painful. Good tests and a proper TDD approach require that the production code follows the SOLID principles. You may not have heard about the SOLID principles, but you most likely use them already. These principles include commonly accepted best practices like the Single Responsibility Principle (let your class do one thing only), decoupling and dependency injection.

If you write the test first, you are forced to follow these principles. If it turns out that a class is not testable, it usually means that you are violating one or more of these principles. Then it's a good idea to do refactoring, or perhaps the class should be redesigned altogether.

Bottom line is - TDD is an efficient design tool because it encourages a clean and decoupled design. Complex code is hard to test and is a bad idea to begin with!

Where to start

So how do you get started with test-driven development? That's too large a topic to cover in this blog post, but I highly recommend getting a book. You will save yourself from countless hours of frustration if you get a fundamental understanding of writing tests instead of reading random articles on the web.

The Art of Unit Testing by Roy Osherove is a very good book. Make sure that you get the 2nd edition, as it is more up to date on tools and methodology.

One last word: remember that test-driven development is not magic. It will not magically solve all your problems and make your codebase bug free tomorrow. It's a tool that, if used correctly, will help you create software with fewer defects, higher quality and better architecture.

TDD is software development done the scientific way. Good luck!


Welcome to TDD Addict

Welcome to my blog about test-driven development (TDD)!

I am a software engineer in Blueback Reservoir, a company specializing in providing consulting services and software solutions for the global oil & gas exploration and production industry.

One of my favourite topics within software development is automated software testing. Every so often we all stumble upon various challenges related to unit testing, and I will publish random thoughts on the topic here. I hope that I can help others with improving their TDD skills by sharing my experiences with you.

Happy TDD'ing!