It was fun to be on stage at the end of summer at the Burton Catalyst conference in San Diego, facing the somewhat over-emphatic Scott Drummonds of VMware in a debate about performance and benchmarking. In that tussle, Scott tried to justify why VMware’s continued resistance to the publication of independent benchmarks on virtualization performance is “reasonable”. In essence, VMware reserves the right to prevent publication of any comparative benchmarks between its products and those of its competitors, on the grounds that users are incompetent at undertaking benchmarking themselves. Now, I don’t want to re-open that can of worms here, but as the year draws to a close I think it is worthwhile to give credit to a couple of major achievements in the world of benchmarking during 2009.
The first is a public acknowledgement of the incredible work done by Project Virtual Reality Check, which has developed, and continues to develop, a customer-focussed benchmarking tool set that is entirely independent of any virtualization vendor, and moreover incredibly useful for identifying the bottlenecks in virtual infrastructure that can limit performance. So, hats off to Ruben and Jeroen for developing the industry’s most practical and credible virtualization benchmarking suite. The VRC suite is now used at Citrix to ensure that we are meeting performance requirements, and to my way of thinking at least, this means that we can be held accountable to a customer-focussed standard – something we strive for but could never achieve without a reputable independent benchmarking effort.
The second is to point to what may well be a virtualization record for 2009 – the results of performance testing undertaken by a cloud customer of ours, nicknamed “Project Monterey”, which I believe has set a system I/O performance record for a service-provider cloud offering – using Solarflare IOV NICs, switching from Arista, and XenServer. Monterey is no run-of-the-mill cloud. It is a high-end, performance-optimized platform focussed on financial services, built for some of the most demanding workloads in the enterprise. Whilst congratulations are in order for the Monterey team, the point I want to make relates not to the specific technology or market that Monterey targets, but to the publication by a vendor of its own independently conceived, and arguably highly application-specific, benchmark results – results that no virtualization vendor could ever reproduce or endorse with any authority, and that are credible precisely because they come from a cloud vendor promising an SLA to its customers.
My goal here is to recognize that the virtualization industry is maturing, and that in 2009 it made very significant strides. The arrival of independently credible, expert-led, performance-focussed benchmarking by a channel partner and a cloud vendor – each putting their brand on the line on the strength of their own methodologies – is a big deal. It means that our customers – clouds or enterprises – who run our products can be more confident in their own ability to deliver scalable virtual infrastructure, and to stand behind it as a performance commitment to their enterprise users in turn. Tools and techniques that advance this cause are of great value to end users. I’d rather have an end user proclaiming the value of XenServer than have to claim it for myself. And in all honesty, I’d rather have independent results to point to than have to deal with uncorroborated assertions from competitors.
So, perhaps this is a good opportunity to reaffirm a commitment from Citrix in the context of virtualization: we are committed to the open publication of virtualization benchmarks. When we fail to meet a commitment, we’d rather see it written up on the front page of the New York Times than hidden, because then we will be strongly motivated to address customer needs sooner. We are diametrically opposed to any vendor EULA that limits a customer’s ability to promote understanding of how best to deploy our software, and of what its limitations may be. We also wholeheartedly reject the notion that a competitor will provide useful insight into the good or bad aspects of our products.