Citrix's testing of XenServer is highly automated, as described here. As Citrix embarks on a major investment to bring Citrix CloudPlatform testing up to a similar level of automation, it is instructive to review the cost models for automated versus manual testing. The model presented here is based on data gathered over years of experience with XenServer automation.
For simplicity I split testing into three phases:
- Creation of an automated test framework
- Creating test cases for a given new feature and testing it for its first release
- Regression testing that feature on subsequent releases
Creation of automated test framework
For manual testing, the cost is obviously zero.
For automated testing, this will cost some (probably large) amount P, although this cost will be amortized across all the features that get tested on it.
Creating test cases for a given new feature and testing it for its first release
Cost of specifying and running manual test cases = 2N (where N is an arbitrary unit of testing effort)
Cost of specifying and developing automated test cases (in first 1-2 years before library built up) = 4N
Cost of specifying and developing automated test cases (subsequently, once library built up) = 3N
In summary, the cost of developing automated test cases is high, especially when you first start out and are not yet enjoying economies of scale.
It is also critically important to resist the temptation to squeeze this initial automation cost. Doing so inevitably yields unreliable, low-quality test cases that not only carry a high ongoing cost of ownership but also fail to do an effective job of validating your product.
Regression testing that feature on subsequent releases
Cost of manual regression testing the feature (no automation available) = N
Cost of automated regression testing the feature = N/5
The reason why automated regression testing is not entirely free is the need to triage test failures, which is a non-trivial exercise with complex products, test frameworks and lab environments.
So for each new feature, taking N = 10, the cost comparison between automated and manual testing can be worked through concretely.
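As a minimal illustrative sketch of that comparison (not Citrix's actual figures), the formulas above can be turned into a cumulative cost model. This assumes the post-library automation cost of 3N per feature and ignores the framework cost P, since it is amortized across many features:

```python
# Cumulative per-feature testing cost over a number of releases,
# using the effort figures from the model above with N = 10 units.
# Assumptions: automated test cases cost 3N to develop (library
# already built up); the amortized framework cost P is excluded.
N = 10

def manual_cost(releases):
    """Manual: 2N for the first release, then N per regression pass."""
    return 2 * N + (releases - 1) * N

def automated_cost(releases):
    """Automated: 3N to develop the cases, then N/5 per regression pass."""
    return 3 * N + (releases - 1) * N / 5

for r in range(1, 5):
    print(f"release {r}: manual={manual_cost(r)} automated={automated_cost(r)}")
```

Under these assumptions automation starts out more expensive (30 vs 20 units for the first release) but overtakes manual testing by the third release, and the gap widens by 8 units of effort with every release thereafter.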
Automated testing is capital-intensive: it requires large amounts of hardware, regularly refreshed to keep the automation running reliably. However, this is offset by very high hardware utilization. In the XenRT lab, which runs tests 24×7, utilization is near 100%. Manual testing also requires hardware, but there is less need for that hardware to be up-to-date and reliable, so yearly refresh costs will be lower. What is indisputable is that utilization under manual test will be far lower.
Developing automated test cases for new features is expensive, especially in the initial phase before a library of re-usable utility functions has been built up, and it usually costs more than manually testing the same feature. However, there is a huge pay-off in the medium term from the vastly decreased cost of regression testing on future releases (quite apart from the increased product quality and consequent reduction in maintenance costs). Analysis of XenServer automation suggests that achieving a comparable level of regression test coverage via manual testing would require a yearly operational expenditure (in low-cost countries) five times greater than is spent on operating, maintaining and refreshing the automation lab.
Finally, note that automated testing alone is not a sufficient guarantor of quality; it must be complemented by manual testing. With extensive automation in place to take care of routine regression testing, however, the expensive manual effort can be targeted at negative conditions, edge cases and known problem areas, giving a much higher rate of return.