In 2011 Citrix acquired cloud software startup cloud.com. Their CloudStack software product was subsequently donated to Apache where it became the Apache CloudStack open source project, with a Citrix commercial product derived from it (Citrix CloudPlatform, hereafter CCP).
As with many software startups, the initial emphasis was on innovating and establishing market position. Cloud.com, and then Citrix, concentrated on rapidly bringing a product to market and then maintaining its lead by growing its feature set, staying responsive to customer and market demands in the fast-paced cloud orchestration sector. Citrix remains well placed to continue this strong track record.
Over the last year or so the Citrix CloudPlatform engineering team has continued to raise the bar, improving both engineering practices and product quality.
This blog describes one of the measures being taken to ensure that Citrix CloudPlatform continues as a proven, market-leading cloud orchestration product.
The general approach to quality engineering for Citrix CloudPlatform is described more fully here, in an article written for XenServer which holds equally true for CCP.
In essence, our strategy is to continually improve both the quality of our engineering practices and the quality of the product itself. This blog considers one specific aspect of this: validating that CCP meets its quality goals through automated testing.
What we test, and when we test it
For reasons elaborated in this blog on engineering efficiency, it is important to perform different types of testing at different points in the software development lifecycle. Each level of testing has a different goal – but the overall goal is to improve quality and optimise engineering efficiency by detecting and fixing bugs as close to the point of injection as possible.
For CCP we currently perform the following:
- Short-cycle testing by developers on their own machines, prior to committing to the Apache CloudStack master branch or a CCP release branch – this must be fast-turnaround testing, and is used to catch basic defects as close to the point of injection as possible. Discussion is ongoing on the Apache dev lists about systematizing this through a Git workflow (or equivalent).
- Long-cycle testing of ACS Master – this is a longer running, and therefore less frequently run, regression test of functionality on ACS Master. Citrix currently runs a series of BVT suites on each supported hypervisor and networking configuration, each BVT taking around 1 hour to execute. Citrix also currently runs a series of longer Regression suites, again on each supported hypervisor and networking configuration. These take many hours to execute. Our strategy is to continue to increase the level of test coverage in these suites, while also increasing lab capacity to ensure they can all be run sufficiently frequently to capture defects early.
- Long-cycle testing of CCP – this is very similar to the above. However, it is run on a sophisticated Citrix-internal automation framework known as XenRT, more fully described here. This runs the same BVT and Regression suites as above, but on CCP builds. In addition, it runs sophisticated non-functional testing of CCP, including stress, scale, performance, stability, upgrade, resiliency and interop tests. These are key to improving and maintaining the overall customer experience on CCP. Critically, they also help ensure that CCP runs smoothly in conjunction with Citrix XenServer, Citrix NetScaler and Citrix XenDesktop.
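The tiering above can be sketched in code. The following is a minimal, illustrative Python sketch of the idea – all suite names, runtime estimates and the `SHORT_CYCLE_LIMIT` threshold are hypothetical, and this is not the actual CCP or XenRT scheduling logic: fast suites run on the developer's machine before commit, while slower suites are expanded into one scheduled run per supported hypervisor configuration.

```python
# Hypothetical sketch of tiered test selection: short-cycle suites run
# pre-commit; long-cycle suites are fanned out across hypervisor configs.
from dataclasses import dataclass, field


@dataclass
class Suite:
    name: str
    est_minutes: int                      # rough runtime estimate
    hypervisors: list = field(default_factory=list)


# Hypothetical registry; real BVT/Regression suites and timings differ.
SUITES = [
    Suite("smoke_deploy_vm", 5, ["XenServer"]),
    Suite("bvt_network_basic", 60, ["XenServer", "KVM", "VMware"]),
    Suite("regression_storage", 480, ["XenServer", "KVM"]),
]

SHORT_CYCLE_LIMIT = 15  # minutes a developer will wait before committing


def tier(suite):
    """Classify a suite: short-cycle runs pre-commit, long-cycle on a schedule."""
    return "short-cycle" if suite.est_minutes <= SHORT_CYCLE_LIMIT else "long-cycle"


def plan(suites):
    """Expand long-cycle suites into one run per hypervisor configuration."""
    runs = []
    for s in suites:
        if tier(s) == "short-cycle":
            runs.append((s.name, "developer-machine"))
        else:
            runs.extend((s.name, hv) for hv in s.hypervisors)
    return runs
```

Calling `plan(SUITES)` here yields one pre-commit run for the smoke suite and five scheduled runs for the BVT and Regression suites – the same trade-off the list above describes: cheap checks close to the point of injection, expensive coverage running less frequently across every configuration.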
In parallel with the automated testing described above, CCP is subject to manual testing to catch bugs that automated testing is not well suited to catching. Once all of this has happened, CCP undergoes further testing in Citrix's Solutions Lab, where it is deployed and tested at scale alongside many other Citrix products.