There is an interesting change afoot in virtualized networking. Both VMware and XenServer are boasting richer network stacks, with the goal of simplifying the management of policies (firewalling, QoS, etc.) for each virtual interface in a dynamic virtualized world.

In our case, the new Open vSwitch has been in beta for a while and will ship in our next release, Cowley. Changing out the networking stack of a virtual infrastructure platform is a big deal, because it is a mission-critical feature set: if an administrator could botch a network config and leave a server unreachable, perhaps thousands of miles away, that would be rather unfortunate. Given the extensive development and testing the vSwitch requires, and its consequences for other optimizations and enhancements in the networking stack, the XenServer performance team had to place a couple of bets. The first release of the vSwitch would almost surely need some performance tuning relative to the existing legacy stack, and tuning the legacy stack itself was no longer relevant, so we placed an early bet on SR-IOV.

SR-IOV (Single Root I/O Virtualization) is a PCI-SIG standard that enables a converged network adapter to offer virtualization-safe hardware virtual functions (VFs): devices that can be directly assigned to virtualized guests. The guest runs the device driver for the hardware and interacts directly with the device itself, so there is no hypervisor penalty for I/O. We had been reporting impressive performance results with SR-IOV compliant devices for a couple of years, but the feature was not really relevant until a significant number of vendors began to support it. With SR-IOV becoming mainstream, XenServer 5.6 includes SR-IOV support, although you have to use the CLI to configure it. Nonetheless, this was a landmark release: we were the first vendor to offer a standardized framework supporting this important capability. We launched it with Dell at Intel IDF in the fall of 2009, a full six months ahead of any competing product. This capability was first to market thanks to significant contributions of technology to Xen by Intel, the project's key development partner.
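To make the VF concept concrete, here is a minimal sketch of enabling and inspecting VFs from the shell on a Linux host with an SR-IOV capable NIC. This is illustrative only, not the XenServer CLI workflow described above; the interface name `eth0` and the VF count are assumptions.

```shell
# Enable 4 virtual functions on the physical function (PF) behind eth0.
# (Older drivers used a module parameter instead, e.g. ixgbe's max_vfs=4.)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# Each VF now shows up as its own PCI device, ready for direct assignment
# to a guest. The guest loads the VF driver and performs I/O against the
# hardware with no hypervisor in the data path.
lspci | grep -i "Virtual Function"
```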

Why the haste? Toward the end of last year we were already facing the need to scale XenDesktop to manage tens of thousands of virtual desktops for our largest customers. Unfortunately, booting hundreds of VMs across a network can cripple performance, and even Citrix Provisioning Services, on which XenDesktop relies to scale storage and boot performance, could not be counted on to scale adequately. With SR-IOV support in XenServer 5.6, however, Provisioning Services can now deliver native performance while running as a VM. Below are performance charts showing the CPU utilization of XenServer 5.6 running PVS as a VM, both without and with SR-IOV support, while booting 600 hosted virtual desktops. Each VM boots in a couple of seconds.

[Chart] Host CPU utilization while streaming 300 desktop VMs, SR-IOV OFF


In the tests above we used the Intel "Niantic" (82599) 10GbE card.

I should also point out that SR-IOV support is critical for Citrix NetScaler VPX, the fully featured virtual appliance version of NetScaler, which can be dynamically provisioned into a private or service-provider cloud to provide all the benefits of load balancing, protocol offload, application firewalling, PCI compliance and more. With an initial supported maximum of 3 Gb/s per virtual appliance, NetScaler needs the horsepower of SR-IOV to meet the requirements of demanding cloud service providers. With the Intel chipset we can easily drive a VPX VM above 25 Gb/s, and our internal test results indicate that with SR-IOV we can max out the I/O controllers of any server, at many tens of Gb/s of throughput. I will be demonstrating this at Synergy Berlin.

But the astute reader will have noticed a puzzling challenge that SR-IOV raises for virtualization vendors: we now have a great software virtual switch, but SR-IOV plumbs a vNIC directly into each guest, bypassing the rich policy controls of the vSwitch. Oops! Well, I'm happy to report that a version of the Open vSwitch that supports Intel Niantic SR-IOV is well under way. Our goal is to offer precisely the same feature set on a rich, programmable, SR-IOV enhanced virtual switch as on the software virtual switch. This requires that the NIC be able to implement policies per virtual function, giving us granular control per vNIC. The Niantic card is one such NIC, and will usher in the next quantum leap in virtualized I/O performance.
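As a flavor of what "policies per virtual function" can mean at the NIC level, here is a hedged sketch using standard iproute2 tooling on a Linux host. The device name and the values are assumptions, and this is host-side VF configuration, not the Open vSwitch integration itself.

```shell
# Pin a MAC address to VF 0 so the guest cannot spoof other addresses.
ip link set dev eth0 vf 0 mac 02:00:00:00:00:01

# Tag all of VF 0's traffic into VLAN 10, isolating it from other tenants.
ip link set dev eth0 vf 0 vlan 10

# Cap VF 0's transmit rate (in Mbit/s) for simple QoS.
ip link set dev eth0 vf 0 rate 1000
```

The hardware enforces these controls per VF, which is exactly the hook a programmable virtual switch needs to apply the same per-vNIC policies it would enforce in software.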