Despite our best efforts, bugs will be with us forever. Human error, limited data sets, and, perhaps most importantly, time keep us from finding all the issues. Still, we must try, so we deploy numerous approaches to finding them. It reminds me of the saying about advertising attributed to John Wanamaker:
Half the money I spend on advertising is wasted; the trouble is I don't know which half.
That same sense of uncertainty can creep into software testing too.
Exhaustive testing certainly gives you the best chance. Look at the largest software-as-a-service company and you’ll find things like the Hammer team. The Salesforce.com Hammer team dedicates itself to regression testing all customer functional tests before each major release: running, diffing, root-causing, and fixing any issues across more than 60 million functional tests (as of 2013). They find and fix issues that are literally one in a million. At that scale it is worth it for both the company and its customers.
But most of us operate at a much smaller scale and budget. In general, the smaller the customer segment, adoption, budget, or contract, the leaner your testing has to be. So here are 5 ways to make the most of your testing time.
Automated Unit Testing
First off, there is no replacement for unit testing. Any code that isn’t tested can be assumed broken. Automated unit tests running in CI give you a direct, fast feedback loop. Small, orthogonal tests make the biggest impact. There is certainly some debate about unit-testing style; in my opinion, whether you take a TDD approach and use tests as a specification or treat them as design validation, you’ll still be better for it.
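As a minimal sketch of what “small and orthogonal” means in practice (the function and file names here are hypothetical, and pytest is assumed), each test checks exactly one behavior:

```python
# test_discounts.py -- a minimal, hypothetical example; the function under test is a toy.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Toy code under test: reduce a price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_reduces_price():
    # One behavior per test keeps failures easy to localize.
    assert apply_discount(100.0, 25) == 75.0


def test_apply_discount_rejects_invalid_percent():
    # Edge cases get their own small, focused tests.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```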
Getting the Ratios Right
So how do you optimize your automation coverage? I assume most people reading this are familiar with the test pyramid. As soon as you see it, it makes perfect sense. Like the food pyramid, it embodies not only good ratios but also the fact that time is precious. The base of the pyramid is unit testing: focused tests against the smallest possible pieces of code. Faster, less complex tests give you more chances to catch issues and easier problems to diagnose. As you go up the pyramid, tests get slower and more complicated.
Even when you don’t live up to the gold standard, the model gives direction on how to tailor your tests or refactor your code.
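One lightweight way to keep those ratios visible, sketched here with pytest (the tier names and cadence are illustrative assumptions, not a prescription), is to tag tests by pyramid layer and run the fast layers most often:

```python
# conftest.py -- hypothetical marker registration that mirrors the pyramid's tiers.
def pytest_configure(config):
    config.addinivalue_line("markers", "unit: fast, isolated tests (the wide base)")
    config.addinivalue_line("markers", "integration: tests that cross module or service boundaries")
    config.addinivalue_line("markers", "e2e: slow, full-stack tests (the narrow tip)")
```

Running `pytest -m unit` on every commit and the slower tiers on merges or nightly keeps the pyramid’s shape honest.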
Dogfood
If applicable, you should use your own product to get things done. The firsthand experience of dogfooding your app illuminates everything in bright lines. The trick is making sure the feature set stays focused on what real customers want.
Bug Hunt
Testing is much more fun when you put it in the context of a game. Set aside a specific time and get the team to identify test debt or tricky areas, then start a timer and get to testing. The sky's the limit on how to organize these events: you could compete or collaborate on bugs found, code covered, tests written, performance improved, treasure hunts, bingo, and so on. You could even figure out a way to map capture the flag onto testing. Seriously, just try it. Any friendly game will be a welcome approach.
At Runnable we call them Bug Hunts and use them around milestones to augment our normal testing. The competitive, lighthearted nature brings out creativity and turns up bugs. At its root, it changes everyone's perspective, broadens thinking, and adds fun incentives.
Canary Tests
One other approach that works well under tight time constraints is synthetic transactions, or canary tests. On a small team or product, it's usually not worth spending time on rich end-to-end tests; the product, features, and technology are all likely to change. Instead, focus on a small set of tests you can run against your production environment and use them to alert on any production issue. The investment aligns perfectly with customer experience. Obviously you should run them before you deploy, but their primary job is monitoring production quality.
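To make that concrete, here is a minimal canary sketch in Python; the endpoint, thresholds, and the use of the `requests` library are all assumptions for illustration:

```python
# canary_build_flow.py -- a minimal, hypothetical canary for one core user flow.
# The endpoint and thresholds are placeholders, not a real production setup.
import sys
import time
import requests

PROD_URL = "https://app.example.com/api/status"  # hypothetical production endpoint
TIMEOUT_SECONDS = 10
MAX_LATENCY_SECONDS = 2.0


def run_canary() -> bool:
    started = time.monotonic()
    try:
        response = requests.get(PROD_URL, timeout=TIMEOUT_SECONDS)
    except requests.RequestException as exc:
        print(f"canary failed: request error: {exc}")
        return False
    elapsed = time.monotonic() - started

    # A canary asserts on the user-visible result, not on internal details.
    if response.status_code != 200:
        print(f"canary failed: status {response.status_code}")
        return False
    if elapsed > MAX_LATENCY_SECONDS:
        print(f"canary failed: took {elapsed:.2f}s")
        return False
    print(f"canary passed in {elapsed:.2f}s")
    return True


if __name__ == "__main__":
    # A non-zero exit code lets a scheduler or monitor treat any failure as an alert.
    sys.exit(0 if run_canary() else 1)
```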
At Runnable we use three primary canary tests, one for each core user flow (build, logs, and source control integration). Each is built from only a handful of essential steps in that user's experience, and each takes special care to clean up its state after every run. They've proved very accurate at identifying real production issues. Each runs on a timer in our batch processing system, uses APIs to update our DataDog reporting, and then triggers PagerDuty on any issues.
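The reporting glue might look roughly like the sketch below. It assumes the official `datadog` Python client with a DogStatsD agent on localhost, and the metric and tag names are made up for illustration; in this arrangement the PagerDuty page comes from a DataDog monitor on the metric, not from the script itself:

```python
# report_canary.py -- hypothetical glue that reports a canary result to DataDog.
# Assumes the `datadog` Python package and a DogStatsD agent on localhost;
# the metric and tag names are illustrative, not Runnable's real ones.
from datadog import initialize, statsd

from canary_build_flow import run_canary  # the sketch above

initialize(statsd_host="127.0.0.1", statsd_port=8125)


def report(flow: str) -> None:
    passed = run_canary()
    # A DataDog monitor on this metric can page via PagerDuty when it drops to 0.
    statsd.gauge("canary.success", 1 if passed else 0, tags=[f"flow:{flow}"])


if __name__ == "__main__":
    report("build")
```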
Most of these items do double duty, which helps you economize your effort: unit tests often double as specification, dogfooding is simply using your own service, and canary tests are production monitoring. Each approach can make a solid improvement in your testing. Depending on your context, there are plenty of other tools to bring to bear on the bug search. In future posts we’ll cover even more than these core tools.