I often see this question asked in various forums online, and everyone rushes in to say “yes, of course you should”. Although I do agree, I don’t think automated testing should just be blindly included in every piece of work, so I wanted to describe the scenarios where it really is beneficial.
- Complex distributed logic that is impossible to get your head around quickly, particularly after a long time away from the code
- Complex isolated logic that has so many permutations that it is hard to cover all of them with a manual test
- Logic that is dependent on scenarios that are difficult to reproduce with manual testing
- Logic on which we depend but don’t control (third-party packages or APIs)
Complex Distributed Logic
This is the kind of logic that has multiple moving parts, distributed as separate services within a whole solution, and a change to one part can inadvertently bring a large part of the application down.
The testing here takes the form of high-level integration tests, either because unit test coverage isn’t good enough, or because we haven’t had time to isolate mocked scenarios but have had time to generate fake data for them (which amounts to much the same thing as not having enough unit test coverage).
This kind of automated integration testing stops development (and likewise refactoring) from grinding to a halt when it’s impossible to get a run-through of the entire application into your head at one time.
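As a sketch of what this looks like, here is a minimal integration-style test that exercises two hypothetical services together using fake data rather than mocks. The service names and behaviour are illustrative, not from any real application:

```python
import unittest

# Hypothetical stand-ins for two deployed components: an inventory
# service and an order service that calls it. In production the call
# between them would cross a service boundary.
class InventoryService:
    def __init__(self, stock):
        self._stock = dict(stock)  # fake data seeded by the test

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError(f"insufficient stock for {sku}")
        self._stock[sku] -= qty

class OrderService:
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, sku, qty):
        self._inventory.reserve(sku, qty)  # cross-service call
        return {"sku": sku, "qty": qty, "status": "placed"}

class OrderFlowIntegrationTest(unittest.TestCase):
    """Exercises the whole order flow end to end."""

    def test_order_reduces_stock(self):
        orders = OrderService(InventoryService({"ABC": 5}))
        self.assertEqual(orders.place_order("ABC", 2)["status"], "placed")

    def test_order_fails_when_out_of_stock(self):
        orders = OrderService(InventoryService({"ABC": 1}))
        with self.assertRaises(ValueError):
            orders.place_order("ABC", 2)
```

Run with `python -m unittest`. The point is that the test spans both services, so a change to either one that breaks the flow is caught immediately.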
Complex Isolated Logic
Sometimes a change, particularly a bug fix, appears on the surface to be simple but in reality has so many permutations that it is difficult to pin down all the scenarios it has to support. Automated testing is invaluable here, and can be the difference between a successful deployment and an immediate rollback.
I’ve been in scenarios where QA was waiting for bug fixes to be deployed to their test environment and their tests couldn’t be allowed to fail due to an upcoming release window. I had time constraints of my own (often needing to complete fixes within a matter of hours), and without unit tests it would have been impossible to develop quickly and be confident that the fixes would work.
This is the unit-testing equivalent of the integration testing of distributed logic above.
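A common way to pin down all the permutations is a table-driven test. The pricing rule below is hypothetical, just to show the shape: a “simple” function with a few interacting flags whose combinations are easy to miss in manual testing:

```python
import unittest

# Hypothetical pricing rule with interacting flags -- the kind of
# "simple" fix whose permutations are easy to miss manually.
def shipping_cost(weight_kg, express, member):
    cost = 5.0 + 1.5 * weight_kg
    if express:
        cost += 10.0
    if member:
        cost *= 0.9  # members get 10% off the total
    return round(cost, 2)

class ShippingCostTest(unittest.TestCase):
    def test_all_permutations(self):
        cases = [
            # (weight, express, member, expected)
            (2.0, False, False, 8.00),
            (2.0, True,  False, 18.00),
            (2.0, False, True,  7.20),
            (2.0, True,  True,  16.20),
            (0.0, False, False, 5.00),
        ]
        for weight, express, member, expected in cases:
            with self.subTest(weight=weight, express=express, member=member):
                self.assertEqual(shipping_cost(weight, express, member), expected)
```

`subTest` reports every failing permutation rather than stopping at the first, which is exactly what you want when chasing down which combination a bug fix has broken.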
Hard to Reproduce Test Scenarios
If you’ve ever done work across time zones, you’ll know that it’s infeasible to manually test an application by changing the time zone of the local machine’s clock and running through a test script. The only realistic way to test this kind of thing is to inject a system clock into your code and fake an instance of it in your tests.
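Clock injection can be as lightweight as passing a `now` callable instead of calling `datetime.now()` directly. A minimal sketch (the function and its business rule are illustrative):

```python
from datetime import datetime, timezone, timedelta

# The code under test receives a `now` callable instead of reading the
# system clock itself, so tests can freeze time in any zone they like.
def is_business_hours(now, open_hour=9, close_hour=17):
    return open_hour <= now().hour < close_hour

# Production would pass something like: lambda: datetime.now(local_tz)
# A test fakes the clock at an exact moment in, say, Tokyo time:
tokyo = timezone(timedelta(hours=9))

mid_morning = lambda: datetime(2024, 3, 1, 10, 30, tzinfo=tokyo)
assert is_business_hours(mid_morning) is True

late_night = lambda: datetime(2024, 3, 1, 22, 0, tzinfo=tokyo)
assert is_business_hours(late_night) is False
```

No test script or machine reconfiguration needed: each scenario is just a different fake clock.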
This also applies to code that tests various permutations of asynchronous result handling. It’s impossible to manually reproduce results being returned in certain orders and after certain times, without faking it in a test.
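For the asynchronous case, giving each fake task a controlled delay lets a test dictate the completion order exactly. A small sketch with `asyncio` (the fetch function and names are hypothetical):

```python
import asyncio

# Fake async operation whose completion time the test controls.
async def fake_fetch(name, delay):
    await asyncio.sleep(delay)
    return name

async def first_result(tasks):
    # Return whichever task completes first, cancelling the rest.
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

async def main():
    # In a live system the winner depends on network timing; the fake
    # delays pin the ordering down so the test is deterministic.
    tasks = [asyncio.create_task(fake_fetch("fast", 0.01)),
             asyncio.create_task(fake_fetch("slow", 0.05))]
    winner = await first_result(tasks)
    assert winner == "fast"

asyncio.run(main())
```

Swapping the two delays is all it takes to test the opposite ordering, something no amount of manual clicking can reliably reproduce.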
There is some overlap here with 2. Complex Isolated Logic, as the hard to reproduce scenarios often pin down complex logic.
Logic We Depend On but Don’t Control
Being able to fake a third party is incredibly useful. We can make our assumptions explicit in our mocking code, start building before new third-party functionality is available, fake exceptional behaviour, and exercise expensive API calls without incurring a cost. All of this often makes a good third-party mock well worth the development effort.
Building a mock of a third party is often a no-brainer, as our automated tests can’t be run against a live API. We can sometimes take the view that it isn’t our concern whether a third-party API works as expected, since we can always raise a ticket if it doesn’t, but an accurate mock saves us from any last-minute surprises.
This ties in with 3. Hard to Reproduce Scenarios, as we can use our fake third party to reproduce error conditions that we could never trigger against a real API.
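Here is a minimal sketch of that idea using `unittest.mock`. The payment client and its API are hypothetical; the point is that the fake costs nothing to call and can raise errors the live service would never let us trigger on demand:

```python
from unittest import mock

# Hypothetical wrapper around a paid third-party payments API.
class PaymentClient:
    def charge(self, amount_cents):
        raise NotImplementedError("real implementation calls the live API")

def take_payment(client, amount_cents):
    try:
        client.charge(amount_cents)
        return "charged"
    except ConnectionError:
        return "retry-later"

# autospec keeps the fake's interface in sync with the real client, so
# our assumptions about the API stay explicit and checkable.
client = mock.create_autospec(PaymentClient, instance=True)
client.charge.side_effect = ConnectionError("gateway timeout")

# The error path costs nothing and needs no cooperation from the vendor.
assert take_payment(client, 500) == "retry-later"
client.charge.assert_called_once_with(500)
```

`create_autospec` also fails fast if the real client’s signature drifts away from what our code assumes, which is one way a mock documents our dependency on the third party.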
Of course, after writing all this out, I’ve come to the conclusion that at least one of the points above is likely to apply to most non-trivial applications, meaning that automated tests will become essential sooner or later.