Automated Testing is Like Optimizing

I have my roots in the Atari demo scene, so I know the joys of optimizing the inner loop of a software texture mapper down to the last assembly instruction. The bragging rights of knowing that not a single clock cycle is wasted. I am also grown up enough to know that this is not a skill many need to practice. In fact, going about it this way on a customer's project would be fraudulent, even if I could dazzle them with how cool optimizing for branch prediction is.

I can understand the joys of making sure that every potential bug is removed, and the bragging rights of 100% automated test coverage as well. But I would argue that it is equally fraudulent to present 100% automated test coverage to a client as a virtue. I am not arguing that automated tests should be abandoned, far from it. I am arguing that automated tests should be applied with the same care as optimizations.

Premature Automated Testing is Bad

We have all learned that premature optimization is bad. Guessing what could fail is hard, so the only safe route would be to write a check for everything! There is even a coding practice encouraging this: test-driven development, or TDD for short. It should really be called ADD, for assertion-driven development, since what is done is not testing but checking, by asserting conditions against a specification.

Take a moment and answer honestly: for any project with good automated test coverage, what is the ratio between the time spent writing the checks and the time these checks have saved you? In my experience, the more zealously automated test coverage is applied, the more time is spent maintaining the tests. On projects where the automated tests were written first and the code later, I have yet to see the checks save more time than they cost to develop and maintain.

When optimizing, it is hard to find the bottlenecks. Gut feeling is often good, but just as often it is wrong, and the performance bottleneck is somewhere you would never expect. It is the same with defects in code: your gut feeling is good for pinpointing many defects, but often the cause is something you never expected. So instead of wasting time guessing what the problems may be down the road, you should inspect and measure where the issues are. And when an issue is found, you write a check for it to ensure it does not break again.
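The found-it-then-pin-it workflow can be sketched in a few lines. The `parse_price` function and its defect are hypothetical, invented for illustration; the point is that the check is written after the real defect was found, using the exact input that triggered it:

```python
# Hypothetical example: parse_price("1 299,00") once crashed on
# locale-formatted input with a thousands separator. After the defect
# was found and fixed, a regression check pins it down for good.

def parse_price(text):
    """Parse a price string such as "1 299,00" into whole cents."""
    normalized = text.replace(" ", "").replace(",", ".")
    return round(float(normalized) * 100)

def test_parse_price_regression():
    # The exact input that triggered the original defect.
    assert parse_price("1 299,00") == 129900

test_parse_price_regression()
```

The check costs almost nothing to write once the failure is known, which is the opposite economics of guessing every possible failure up front.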

Change the Algorithm

The best optimization is often not to optimize at all, but to change to a better algorithm. A bubble sort written in hand-tuned assembly will never yield a better result than even a sloppy merge sort written in C, so do not waste your time.
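The gap is easy to demonstrate. A sketch in Python, comparing a deliberately naive O(n²) bubble sort, the kind one might be tempted to hand-optimize, against the built-in O(n log n) sort:

```python
# No amount of micro-tuning the inner loop of an O(n^2) algorithm
# closes the gap to an untuned O(n log n) one on non-trivial input.
import random
import timeit

def bubble_sort(items):
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.random() for _ in range(2000)]
t_bubble = timeit.timeit(lambda: bubble_sort(data), number=1)
t_builtin = timeit.timeit(lambda: sorted(data), number=1)

assert bubble_sort(data) == sorted(data)  # same result...
assert t_builtin < t_bubble               # ...but no contest on speed
```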

The same can be said about code quality. If you have many issues, and your code breaks often, adding automated tests will only be a band-aid until it breaks the next time. Change the code into something more stable. If needed, write checks to ensure that the new implementation conforms to the same public API when you refactor it. And take care to write those checks, with as good coverage as needed, when you are about to refactor your code, not before.
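Such pre-refactoring checks can be sketched as a small characterization suite run against both implementations. `LegacyCart` and `Cart` are hypothetical stand-ins for the fragile code and its replacement; the suite pins down the public API, while the internals are free to change:

```python
# Hypothetical sketch: before replacing LegacyCart with Cart, pin the
# public API down with characterization checks. Both implementations
# must pass the same suite.

class LegacyCart:                     # the fragile code being replaced
    def __init__(self):
        self.items = []
    def add(self, price, quantity):
        for _ in range(quantity):     # clumsy but working internals
            self.items.append(price)
    def total(self):
        return sum(self.items)

class Cart:                           # the cleaner rewrite
    def __init__(self):
        self._total = 0
    def add(self, price, quantity):
        self._total += price * quantity
    def total(self):
        return self._total

def check_cart_api(cart_class):
    cart = cart_class()
    cart.add(price=100, quantity=3)
    cart.add(price=50, quantity=1)
    assert cart.total() == 350

# The same checks gate both the old and the new implementation.
for cls in (LegacyCart, Cart):
    check_cart_api(cls)
```

Once the rewrite passes, the legacy class and, if they no longer pay their way, many of the characterization checks can be deleted again.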

If you architect your application with many small independent units, each with a clearly defined public API, then refactoring is not a big problem.

Perceived Quality

All green lights and 100% automated test coverage look good on a report. But they can never catch what is most important of all: is the application valuable to the user? In that regard, 100% automated test coverage is a false safety; it gives the illusion of quality. A dangerous illusion that can blind managers, customers, and other stakeholders to the real issues with the application.

No testing framework yet made is as efficient as a human being actually using the application. Not people blindly following a test protocol, that is even worse, but stakeholders using the application as intended. Not just in short bursts before committing code to source control, but regularly in longer sessions, so that you get a feel for how the app is used, what works, and what just grinds your gears. Usability issues are also issues and should be dealt with.

Not every issue is created equal. With the limited time every project has, fixing, or predicting, each and every one of them is never feasible. So do not waste time fixing the one-in-a-million issue, or writing checks for the bread-and-butter code. But do make sure the application fails gracefully. Fail as gracefully as you can for the end user; users are very forgiving if you fail with style and make sure no data is lost. But also fail gracefully for yourself: be generous with logging when you do fail. And remember to add an automated check so that the cause of the failure can never arise again, once you have fixed it.
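Failing gracefully in both directions can be sketched as follows. The `save_document` routine and the store classes are hypothetical, not from any real framework; the shape is what matters: a detailed log line for yourself, and a calm message plus preserved data for the user:

```python
# Hedged sketch of graceful failure: keep the user's data and a calm
# message for them, and a generous log line for yourself.
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

class DiskStore:                      # stand-in for a working store
    def write(self, document):
        pass                          # pretend the write succeeds

class FailingStore:                   # stand-in for a full disk
    def write(self, document):
        raise OSError("disk full")

def save_document(document, store):
    try:
        store.write(document)
        return "Saved"
    except OSError as exc:
        # Be generous with logging for yourself...
        log.error("save failed for %r: %s", document["title"], exc)
        # ...and fail with style for the user: keep the data in memory
        # so nothing is lost, and report the failure calmly.
        document["unsaved_backup"] = True
        return "Could not save right now; your changes are kept safe"
```

Once the underlying cause is found and fixed, the input that provoked the failure becomes exactly the kind of regression check argued for above.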

Model View Controller

All application logic can be divided into three categories: the Model for the business logic of your product, the View for display and input to and from the user (or another application), and the Controller that mediates between the two. Any application of significant size will have smaller MVC patterns within its units as well. Well-defined boundaries between Model, View, and Controller are not only good for reusability, maintainability, and replacing parts; they are also great for testing. Have well-defined and testable public APIs for the parts, and the private implementations tend to be simpler and of higher quality. Do not let the public API be defined by how something is implemented; let the implementation be defined by how you want to access it from the outside.

Writing unit tests for the Model of your application is seldom a waste of time. The Model is often what you begin implementing, so using unit tests as the incubator until you have enough logic in place to implement the first draft of your application is only rational; code that is unused is seldom of any quality at all.
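The incubator idea can be sketched with a hypothetical Model class; the `Invoice` rules and VAT rate here are invented for the example. The tests double as the Model's first "user" while the rest of the application does not yet exist:

```python
# Hypothetical Model-layer sketch: an invoice's business rules, grown
# inside unit tests before any View or Controller exists.

class Invoice:
    VAT_RATE = 0.25                   # assumed rate for the example

    def __init__(self):
        self.lines = []

    def add_line(self, description, amount):
        self.lines.append((description, amount))

    def subtotal(self):
        return sum(amount for _, amount in self.lines)

    def total(self):
        return round(self.subtotal() * (1 + self.VAT_RATE), 2)

# Until a UI exists, these checks are the only way to exercise the code,
# which is why they pay for themselves here.
def test_invoice_totals():
    invoice = Invoice()
    invoice.add_line("Consulting", 1000.0)
    invoice.add_line("Travel", 200.0)
    assert invoice.subtotal() == 1200.0
    assert invoice.total() == 1500.0

test_invoice_totals()
```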

Writing unit tests for the View yields a much lower return on investment. It is also highly probable that you will have to write these tests over and over again. I would go so far as to call them harmful: too many automated tests on the Views may in fact discourage you from making drastic but needed usability changes. And no automated test in the world can ever flag whether a View is usable, looks good, and feels right to the end user.

Writing unit tests for the Controller should be a waste of time; otherwise your Controller is doing too much. A Controller should only be the mediator between the View and the Model; if it does more, then it should be split up and the proper parts moved into the Model and View layers. So writing unit tests for a good Controller is basically just verifying that you are using the APIs of the Model and Views correctly.
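What a sufficiently thin Controller looks like can be sketched with hypothetical classes (the counter example is invented for illustration). A unit test for this Controller would merely restate its two delegating calls, which is the sign that it is thin enough:

```python
# Hypothetical sketch of a "thin" Controller: it only forwards between
# Model and View, so a test for it would only re-verify API usage.

class CounterModel:                   # Model: the business logic
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterView:                    # View: display, stubbed as text
    def render(self, value):
        return f"Count: {value}"

class CounterController:              # Controller: a pure mediator
    def __init__(self, model, view):
        self.model = model
        self.view = view
    def on_button_tapped(self):
        self.model.increment()        # delegate the logic to the Model...
        return self.view.render(self.model.value)  # ...and the output to the View
```

If `on_button_tapped` grew validation or formatting logic of its own, that logic belongs in the Model or the View, where it can be tested on its own terms.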


We have learned that premature optimization is bad: often counterproductive, wasting our time for minimal gains. Let us all grow up and learn that premature automated testing is equally bad. The time we have is too precious to waste on overzealous automated testing instead of adding real value to the product by applying tests where they make a real difference to the product's quality.

This Post Has 5 Comments

  1. Anders Janmyr

    I would say that it is highly dependent on your development environment and your tooling. When I am developing in Ruby and Rails I can write very precise expectations (or tests) for both models and controllers before I write the code. The fact that I write the expectations before the actual code allows me to focus my development on a single method at a time. I find this extremely valuable, since instead of running the whole application to verify my change manually I can let the code verify it for me. The fact that I can run the test automatically afterwards is just an added bonus. I currently don’t write many view specs, but I am seriously considering it if the views contain much application logic. My world mostly consists of web applications, so it is different and easier to test than native applications. But every time I have been too lazy to write tests first for a complex method it has bitten me by forcing me to spend considerably more time fixing the errors than if I had written the test like I’m used to.

    The solution is not to stop testing, but to improve your tooling so that you can trust your tests!

  2. Fredrik Olsson

    I think what you mention about writing expectations for model and controller is the same thing as I briefly referred to as “using unit tests as the incubator [for the model]”. I do still believe that too many automatic checks on the controller are a sign of the controller becoming too bloated.

    I never said anyone should stop testing, nor should anyone stop optimizing. It is a question of applying the resources you have where they make the most impact. And I argue that over-relying on automatic checks leads to waste.

    The tools are, as you say, important. The tools available, the target platform, and the target users all contribute to what testing strategy is most effective. Asserting that automatic checks are always the answer is false.

  3. Christian Hedin

    Automated testing is not always the answer, but it’s usually a very good practice – especially in the model, as you say. You’re probably also right about “premature automated testing”. Don’t do it if it doesn’t provide business value. Although there is an important difference between putting effort into automated testing and optimization. The main goal of optimization is to make things run faster or more resource-efficient. The main goal of testing is to assert the quality, and in the case of iterative development, to improve the quality. Ignoring quality is a really dangerous, and expensive, path to take. Still, not all automated testing provides value, and if it doesn’t then you shouldn’t do it.

    Personally I haven’t been in many projects where it’s been a major issue that too much automated testing is being performed, or that it would affect delivery. I have been in projects where automated tests aren’t being maintained, or where they follow the architecture too rigidly; this I think is an architectural problem. And I have been in many projects where too little automated testing has been done, and it has affected delivery and post-delivery quality.

    The Objective-C community suffers badly because the available tools for automated testing are horrible (shame on you Apple) compared to other platforms. There should be much more automated testing here, not less. As soon as the tools catch up (there are good 3rd party efforts going on) I think we’ll see much more TDD and BDD in the Objective-C community as well.

  4. Ben Mochoi

    I think you misunderstand TDD.
    Having automated tests is one benefit of TDD.

    Mostly though, it improves the design of your code, keeps it clean, makes sure you only code what you need.
