I have my roots in the Atari demo scene, so I know the joys of optimizing the inner loop of a software texture mapper down to the last assembly instruction, and the bragging rights of knowing that not a single clock cycle is wasted. I am also grown up enough to know that this is not a skill many need to practice. In fact, going about a customer's project in this way would be fraudulent, even if I could dazzle the client with how cool optimizing for branch prediction is.
I can also understand the joy of making sure that every potential bug is removed, and the bragging rights of 100% automated test coverage. But I would argue that it is equally fraudulent to present 100% automated test coverage to a client as a virtue. I am not arguing that automated tests should be abandoned, far from it. I am arguing that automated tests should be applied with the same care as optimizations.
Premature Automated Testing is Bad
We have all learned that premature optimization is bad. Guessing what could fail is hard, so the only safe route would be to write a check for everything! There is even a coding practice encouraging this: test-driven development, or TDD for short. It should really be called ADD, for assertion-driven development, since what is done is not testing but checking, by asserting conditions against a specification.
Take a moment and answer honestly: for any project with good automated test coverage, what is the ratio between the time spent writing the checks and the time those checks have saved you? In my experience, the more zealously automated test coverage is pursued, the more time is spent maintaining the tests. On projects where the automated tests were written first and the code later, I have never seen the checks save more time than they cost to develop and maintain.
When optimizing, it is hard to find the bottlenecks. Gut feeling is often right, but just as often it is wrong and the performance bottleneck is somewhere you would never expect. It is the same with defects in code: your gut feeling will pinpoint many of them, but often the cause is something you never expected. So instead of wasting time guessing what the problems may be down the road, inspect and measure where the issues actually are. And when an issue is found, write a check for it to ensure it does not break again.
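A minimal sketch of that workflow in Python, using a hypothetical `parse_price` function (not from the original text): the check is written only after a real failure was observed, so it guards an actual defect rather than a guess.

```python
# A regression check pinned to a defect found in the wild, not guessed in
# advance. `parse_price` is a hypothetical example; the imagined bug report
# said that "1,5" (a comma decimal separator) crashed an import job.

def parse_price(text: str) -> float:
    """Parse a price string, accepting both '.' and ',' as decimal separator."""
    return float(text.replace(",", "."))

def test_parse_price_accepts_comma_decimal_separator():
    # Written *after* the failure was diagnosed and fixed.
    assert parse_price("1,5") == 1.5

def test_parse_price_still_accepts_dot():
    # And a check that the fix did not break the original behavior.
    assert parse_price("1.5") == 1.5
```

The point is the order of events: measure, diagnose, fix, and only then add the check that keeps this specific failure from coming back.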
Change the Algorithm
The best optimization is often not to optimize at all, but to switch to a better algorithm. A bubble sort written in assembly will never beat even a sloppy merge sort written in C, so do not waste your time.
The same can be said about code quality. If you have many issues and your code breaks often, adding automated tests will only be a band-aid until it breaks the next time. Change the code into something more stable. If needed, write checks to ensure the new implementation conforms to the same public API when you refactor it. Make a point of writing checks with as good coverage as needed when you are about to refactor your code, not before.
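Such pre-refactor checks can be sketched as characterization tests over the public API. This is an illustrative example, assuming a hypothetical `slugify` function whose observable behavior must survive a rewrite:

```python
# Characterization checks written just before a refactor: they record the
# current public behavior of a hypothetical `slugify` API so any rewritten
# implementation can be verified against the same expectations.

def slugify(title: str) -> str:
    # The old implementation that is about to be replaced.
    return "-".join(title.lower().split())

# Record the behavior we must preserve, not how it is implemented.
GOLDEN_CASES = {
    "Hello World": "hello-world",
    "  Extra   Spaces  ": "extra-spaces",
    "already-a-slug": "already-a-slug",
}

def check_public_api(fn) -> None:
    """Assert that `fn` matches the recorded public behavior."""
    for text, expected in GOLDEN_CASES.items():
        assert fn(text) == expected, (text, fn(text), expected)

check_public_api(slugify)  # the old implementation passes
# ...swap in the new implementation, then run check_public_api(new_slugify)
```

Because the cases describe inputs and outputs only, the implementation behind them is free to change completely.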
If you architect your application with many small independent units, each with a clearly defined public API, then refactoring is not a big problem.
All green lights and 100% automated test coverage look good on a report. But they can never catch what is most important of all: is the application valuable to the user? In that regard, 100% automated test coverage is a false safety; it gives the illusion of quality. A dangerous illusion that can be used to blind managers, customers and other stakeholders to the real issues with the application.
No testing framework yet made is as efficient as a human being actually using the application. Not people blindly following a test protocol, that is even worse, but stakeholders using the application as intended. Not just in short bursts before committing code to source control, but regularly in longer sessions, so that you get a feel for how the app is used, what works, and what just grinds your gears. Usability issues are issues too and should be dealt with.
Not every issue is created equal. With the limited time every project has, fixing, or predicting, each and every one of them is never feasible. So do not waste time fixing the one-in-a-million issue, or writing checks for the bread-and-butter code. But do make sure the application fails gracefully. Fail as gracefully as you can for the end user; users are very forgiving if you fail with style and make sure no data is lost. But also fail gracefully for yourself: be generous with logging when you do fail. And remember to add an automated check so that the cause of the failure can never arise again once you have fixed it.
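One way to picture graceful failure is a save path that salvages the user's data and logs generously when the happy path breaks. `save_document` and its storage callback are hypothetical names for illustration:

```python
# Failing gracefully: preserve the user's data, and log generously for
# yourself. `save_document` and `write_to_storage` are hypothetical names.
import json
import logging
import tempfile

logger = logging.getLogger("app.save")

def save_document(doc: dict, write_to_storage) -> str:
    """Try the real storage; on failure, salvage the data to a temp file."""
    try:
        return write_to_storage(doc)
    except OSError:
        # Be generous with logging: capture the full traceback and context.
        logger.exception("storage failed, salvaging document %r", doc.get("id"))
        with tempfile.NamedTemporaryFile(
            "w", suffix=".json", delete=False
        ) as backup:
            json.dump(doc, backup)
        # The user keeps their work even though the happy path failed.
        return backup.name
```

Once the underlying storage bug is diagnosed and fixed, that is the moment to add the automated check that keeps this particular failure from returning.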
Model View Controller
All application logic can be divided into three categories: the Model for the business logic of your product, the View for display and input to and from the user (or another application), and the Controller that mediates between the two. Any application of significant size will have smaller MVC patterns within its units as well. Well-defined boundaries between Model, View and Controller are not only good for reusability, maintainability and replacing parts; they are also great for testing. Give the parts well-defined, testable public APIs, and the private implementations tend to become simpler and of higher quality. Do not let the public API be defined by how something is implemented; let the implementation be defined by how you want to access it from the outside.
Writing unit tests for the Model of your application is seldom a waste of time. The Model is often what you begin implementing, so using unit tests as the incubator until you have enough logic in place to implement the first draft of your application is only rational; code that is never exercised is seldom of any quality at all.
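That incubator use looks something like this: the Model exists and is exercised before any View is written. `Invoice` is a hypothetical business object for illustration:

```python
# Unit tests as an incubator for Model logic, before any UI exists.
# `Invoice` is a hypothetical example of a business object.

class Invoice:
    def __init__(self) -> None:
        self.lines: list[tuple[str, int, float]] = []  # (name, qty, unit price)

    def add_line(self, name: str, qty: int, unit_price: float) -> None:
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append((name, qty, unit_price))

    def total(self) -> float:
        return sum(qty * price for _, qty, price in self.lines)

def test_total_sums_all_lines():
    invoice = Invoice()
    invoice.add_line("widget", 2, 9.5)
    invoice.add_line("gadget", 1, 1.0)
    assert invoice.total() == 20.0

def test_rejects_nonpositive_quantity():
    invoice = Invoice()
    try:
        invoice.add_line("widget", 0, 9.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

The tests double as the Model's first consumer, so the public API gets shaped by use rather than by implementation details.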
Writing unit tests for the View yields a much smaller return on investment. It is also highly probable that you will have to write these tests over and over again. I would go so far as to call them harmful: too many automated tests on the Views may in fact discourage you from making drastic but needed usability changes. And no automated test in the world can ever tell you whether a View is usable, looks good, and feels right to the end user.
Writing unit tests for the Controller should be a waste of time; otherwise your Controller is doing too much. A Controller should only mediate between the View and the Model; if it does more, it should be split up and the proper parts moved into the Model and View layers. Writing unit tests for a good Controller is basically just verifying that you are using the APIs of the Model and Views correctly.
We have learned that premature optimization is bad: it is often counterproductive, or wastes our time for minimal gains. Let us all grow up and learn that premature automated testing is equally bad. Our time is too precious to waste on overzealous automated testing instead of adding real value to the product by applying tests where they make a real difference to the product's quality.