We all know testing is an essential part of product development. Performance testing is a big part of it, especially for web-based applications, where customers will be lost if the server crashes under high volume or response times become too long. Unfortunately, many projects involve performance testing only at the end, as part of acceptance testing. Common as this approach seems, it has serious problems.
In some cases, performance test results show that a considerable amount of improvement is needed before the product can be released. Now developers have to search all over the code for places that can be optimized. It may seem like a bug hunt, but it's different because it's not about logic errors; after all, the logic may be perfectly fine after rounds of unit testing and functional testing. Developers may have to resort to trial and error, making changes and seeing how much performance they gain. The performance target may be so high that reaching it takes weeks if not months. I have seen developers stare at the code so hard looking for optimizations that they found real bugs missed in unit tests and functional tests :-). Obviously, that's not a reason to leave performance testing until the end.
In other cases, performance testing shows that the numbers are seriously lower than expected, and after some experiments a harsh reality comes to the surface: part of the design is fundamentally wrong. After months of development and testing, things have to go back to square one. This may seem impossible, but it does happen from time to time. I was involved in a project where the performance bottleneck turned out to be disk IO. Simple as it may appear, it was found only after
- trying different web servers, thinking an asynchronous server could solve the problem,
- trying various database optimizations, hoping one of them would be a silver bullet.
The cold, hard fact uncovered by the performance testing was this: the bottleneck was retrieving records from disk. How fast records can be retrieved from disk depends on the access pattern, the database, and the disk. A totally different database had to be used for that project.
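To illustrate the access-pattern point, here is a small hypothetical Python sketch (not from that project) that times reading the same fixed-size records from a file sequentially versus in random order. The record size, record count, and file layout are all made-up values for illustration:

```python
import os
import random
import tempfile
import time

RECORD_SIZE = 4096   # assume 4 KiB records (illustrative value)
NUM_RECORDS = 2048   # ~8 MiB test file

def make_test_file(path):
    """Write NUM_RECORDS fixed-size records to a file."""
    with open(path, "wb") as f:
        for i in range(NUM_RECORDS):
            # each record is the record index repeated to fill RECORD_SIZE
            f.write(i.to_bytes(4, "big") * (RECORD_SIZE // 4))

def read_records(path, offsets):
    """Read one record at each byte offset; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(RECORD_SIZE)
    return time.perf_counter() - start

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "records.dat")
    make_test_file(path)
    sequential = [i * RECORD_SIZE for i in range(NUM_RECORDS)]
    shuffled = sequential[:]
    random.shuffle(shuffled)
    t_seq = read_records(path, sequential)
    t_rand = read_records(path, shuffled)
    print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")
```

On a spinning disk with a cold cache, the random-order pass is typically far slower than the sequential pass; on an SSD or a warm OS cache the gap shrinks, which is exactly why this has to be measured on realistic hardware rather than guessed.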
Now, why isn't early performance testing commonly practiced? One reason is that it requires a non-trivial amount of resources. Sometimes such performance tests involve some programming, so developer resources have to be reallocated to performance test development. Is it worth it? Very likely. It can be a perfect application of the classic principle that an ounce of prevention is worth a pound of cure. An added benefit is that the testing team may also benefit from the tools and procedures the developers create.
Another possible reason is the belief that performance testing requires a tool that is neither simple to use nor cheap. In today's explosion of software products, new tools are springing up like mushrooms after rain; just look around and you will find pleasant surprises. For example, in the area of application performance testing, tools and low-cost/free services like Gatling, loader.io, and NetGend are the new kids on the block, ready to help you do early performance testing.
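You don't even need one of those tools to get a first data point; a throwaway script can already produce latency percentiles. Below is a minimal Python sketch using only the standard library. The local test server, the concurrency level, and the percentile reporting are my own illustrative choices, not the API of any of the tools named above:

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def start_test_server():
    """Start a throwaway local HTTP server; return (server, port)."""
    server = http.server.HTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def timed_get(url):
    """Issue one GET and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_test(url, concurrency=10, requests=100):
    """Fire `requests` GETs using `concurrency` workers; return latencies."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_get, [url] * requests))

if __name__ == "__main__":
    server, port = start_test_server()
    latencies = sorted(run_load_test(f"http://127.0.0.1:{port}/", 5, 50))
    print(f"p50={latencies[len(latencies) // 2] * 1000:.1f}ms  "
          f"p95={latencies[int(len(latencies) * 0.95)] * 1000:.1f}ms")
    server.shutdown()
```

A real tool adds ramp-up profiles, think times, and reporting, but even a sketch like this, run against an early build, can reveal whether the basic numbers are in the right ballpark.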
Looking back on some of my projects, I feel they could have benefited tremendously if we had done performance testing from the start, when the project was still in its infancy. We could have found out early on whether the major decisions (like the choice of database) were fundamentally wrong. Just as with curing a major disease (think cancer), making changes in the early stage is much easier than in the late stage:
- there is not a lot of time and effort invested,
- the code structure is relatively simple. Layers of patches applied over the course of a project can make bugs, especially performance bugs, very hard if not impossible to hunt,
- if a performance bug is introduced in a build, it's much quicker to find the delta between builds and zero in on the bug by process of elimination on that delta. When the delta is reasonably small, developers can often spot the performance bug immediately.
Is early performance testing possible when development has just started? Not always, but it should begin as soon as it becomes possible, and definitely before the end of the project. Once it is possible, performance tests should run regularly, just like functional tests, to catch bugs introduced in each build. Performance bugs are just as nasty as functional bugs, and sometimes harder to hunt, since the code is typically correct in its logic.
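The per-build idea can be sketched very simply: keep the latency numbers from the last known-good build as a baseline and flag the current build if it is meaningfully slower. The baseline figures and the 20% tolerance below are made-up values for illustration:

```python
import statistics

def detect_regression(baseline_ms, current_ms, tolerance=0.20):
    """Compare median latencies; flag if current exceeds baseline
    by more than `tolerance` (a fraction, e.g. 0.20 = 20%).
    Returns (regressed, baseline_median, current_median)."""
    base = statistics.median(baseline_ms)
    cur = statistics.median(current_ms)
    return cur > base * (1 + tolerance), base, cur

if __name__ == "__main__":
    baseline = [10.1, 10.3, 9.9, 10.0, 10.2]   # stored from last good build
    current = [13.0, 12.8, 13.4, 12.9, 13.1]   # measured on this build
    regressed, base, cur = detect_regression(baseline, current)
    print(f"baseline median {base:.1f}ms, current median {cur:.1f}ms, "
          f"regressed: {regressed}")
```

Run from a build script, a check like this turns "the app feels slower lately" into a pointer at a specific build delta, which is where the small-delta advantage described above pays off.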
A serious performance issue can derail a product or cause a project to miss its deadline. Are you ready to use an ounce of performance testing to save a pound of cure?