I’m often asked what Citrix and the open source community are trying to achieve with the Open vSwitch project. Open vSwitch is an open source virtual switch for Xen-based virtual infrastructure (and therefore XenServer, and in future perhaps Amazon EC2 and Rackspace) as well as KVM. It replaces the Linux bridge code with a powerful, programmable forwarding capability and programmable per-virtual-interface ACLs, and it supports OpenFlow, an emerging industry-standard protocol for programming the forwarding plane from an outside controller. OpenFlow-based virtual switches in each server can be logically pooled into a single fabric by an external distributed virtual switch controller, creating a dynamic, multi-tenant, programmable datacenter fabric that supports key innovations in cloud computing. It also lets us exploit standard x86 CPUs to run rich edge packet-processing functions that secure, direct, filter and otherwise control the delivery of cloud-based applications. With Open vSwitch in place, the OpenStack open source cloud orchestration layer will be able to exert direct control over the data center fabric to deliver a rich, enterprise-ready network layer with powerful controls for security, multi-tenancy, load balancing, monitoring, compliance, charge-back and more.
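To make the basic architecture concrete, here is a minimal, hypothetical sketch of standing up an Open vSwitch bridge and handing its forwarding plane to an external OpenFlow controller. It drives the standard ovs-vsctl tool from Python; the bridge name, uplink NIC and controller address are placeholders, and it assumes the Open vSwitch userspace tools are installed and the script runs with root privileges.

```python
# Minimal sketch: create an Open vSwitch bridge, attach the physical uplink,
# and delegate forwarding decisions to an external OpenFlow controller.
# Names and addresses below are illustrative, not a recommended configuration.
import subprocess

def ovs(*args):
    """Run a single ovs-vsctl command, raising if it fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

BRIDGE = "xenbr0"                      # hypothetical bridge name
PHYS_NIC = "eth0"                      # physical uplink to the datacenter network
CONTROLLER = "tcp:192.0.2.10:6633"     # hypothetical distributed virtual switch controller

ovs("--may-exist", "add-br", BRIDGE)              # create the virtual switch
ovs("--may-exist", "add-port", BRIDGE, PHYS_NIC)  # attach the physical NIC
ovs("set-controller", BRIDGE, CONTROLLER)         # hand the forwarding plane to the controller
```

Once the controller owns the bridge, the per-VIF policies described above become flow rules that the controller installs and revokes, rather than static bridge configuration baked into each host.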

To understand the need for Open vSwitch, you have to realize that while CPU virtualization, including hardware support, has evolved rapidly over the last decade, network virtualization has lagged badly behind. The dynamism that virtualization enables is the enemy of today’s locked-down enterprise networks. For example, migrating a VM between servers could mean that network-based firewall and intrusion detection systems are no longer able to protect it. Moreover, many enterprise networks are administered by a different group than the servers, so VM agility challenges an organizational boundary. What we want to achieve is seamless migration of all network-related state for a workload, along with the workload itself. The obvious place to effect such network changes is the last-hop switch, which, courtesy of Moore’s Law and virtualization, now sits on the server itself: either in the hypervisor or (increasingly) in smart hardware associated with a 10Gb/s NIC. Open vSwitch provides granular control over traffic flows: per-flow admission control, the option for rich per-packet processing, control over forwarding rules, and resource guarantees and isolation between tenants or applications. It lets us dynamically reconfigure the network state for each VM, or for each multi-VM OVF package, as it is deployed or migrated. Network state becomes a property of the virtual interface, and as a VM moves about the physical infrastructure, all of the policies associated with its VIF move with it. Suddenly the network team is no longer required in order to move a VM between servers.
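As a rough illustration of policy that travels with the VIF rather than with a physical switch port, the sketch below binds a simple rate limit to a guest’s virtual interface and keys a flow rule on the VM’s MAC address, so the same rules can be re-applied wherever the VM lands after a migration. The interface name, MAC address and numbers are placeholders; in a real deployment the central controller, not a per-host script, would install and move these rules.

```python
# Sketch (hypothetical names): attach policy to a VM's virtual interface, not a port.
# Assumes the ovs-vsctl and ovs-ofctl tools are installed and run as root.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

BRIDGE, VIF, VM_MAC = "xenbr0", "vif1.0", "aa:bb:cc:dd:ee:01"

# Resource guarantee: rate-limit what this VIF can send into the switch
# to roughly 100 Mb/s (rate in kb/s, burst in kb).
run("ovs-vsctl", "set", "interface", VIF,
    "ingress_policing_rate=100000", "ingress_policing_burst=10000")

# Flow-level policy keyed on the VM's MAC address rather than a physical port,
# so the same rule can be re-installed on whichever host the VM migrates to.
run("ovs-ofctl", "add-flow", BRIDGE,
    f"priority=100,dl_src={VM_MAC},actions=NORMAL")
```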

Open vSwitch addresses many of the shortcomings of our original hypervisor bridge code, which grew up from the Linux bridge code, and adds powerful features traditionally found only in dedicated switching infrastructure, such as packet filtering, flow admission control and programmable forwarding. It lets us take advantage of the remarkable price/performance of packet processing on standard CPUs, and the near-term addition of Single Root I/O Virtualization (SR-IOV) to the edge packet-processing feature set will enable the most profound changes in data center and cloud networking architecture since the invention of the router. Most importantly, Open vSwitch is open source and will serve multiple hypervisors. I fully expect the community to make it available as a drop-in replacement for the VMware vDS, and to deliver versions of it for a future release of Hyper-V. This raises the exciting prospect of an entirely open, programmable, hypervisor-independent architecture for networking in the cloud. As a result, the richness of both private and public cloud networks (and hence their ability to support a greater proportion of enterprise workloads) will not depend on the hypervisor. Open vSwitch offers the ISV ecosystem an enormous opportunity to innovate in edge networking, free of the constraints of traditional network-appliance-centric approaches to application delivery, with new, automated management and control plane functions that simplify and accelerate the management of scalable cloud networks.
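To give a flavour of the packet filtering and programmable forwarding referred to above, here is a small, hypothetical per-VIF ACL expressed as OpenFlow rules pushed with ovs-ofctl: permit ARP and IP traffic to one tenant subnet from a guest’s port, and drop everything else arriving from that port. The bridge name, port number and subnet are placeholders, and in practice a controller would compute and install such rules rather than a local script.

```python
# Sketch of a per-VIF ACL as OpenFlow rules. Assumes ovs-ofctl is installed
# and the bridge, port number and subnet are replaced with real values.
import subprocess

def add_flow(bridge, flow):
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

BRIDGE, VIF_PORT, TENANT_NET = "xenbr0", 5, "10.0.0.0/24"

# Permit ARP and IP traffic destined for the tenant's own subnet...
add_flow(BRIDGE, f"priority=300,in_port={VIF_PORT},arp,actions=NORMAL")
add_flow(BRIDGE, f"priority=300,in_port={VIF_PORT},ip,nw_dst={TENANT_NET},actions=NORMAL")
# ...and drop everything else arriving from that port.
add_flow(BRIDGE, f"priority=100,in_port={VIF_PORT},actions=drop")
```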

From a Citrix-specific perspective, Open vSwitch permits us to dynamically instantiate NetScaler VPX, Branch Repeater VPX, or Access Gateway VPX as value-added networking functions within cloud-based networks, and it will help us facilitate the seamless extension of the enterprise network into service-provider-operated clouds. If, as we expect, the Open vSwitch is more broadly endorsed as a common element of future clouds, with open APIs for dynamic control of the data center fabric, it will catalyze an opportunity for all vendors – including those in the network infrastructure business today – to deliver powerful, secure and differentiated cloud architectures.

Many people wonder whether Open vSwitch is “competitive” with the ambitions of traditional networking vendors or with the Cisco Nexus 1000v virtual switch. The answer is no – indeed, quite the opposite. The Nexus 1000v provides Cisco customers with a powerful distributed switch architecture that brings the full Cisco edge processing capability to virtualized environments, including Cisco management and toolset support, and I would have no hesitation in recommending it to Cisco customers. It delivers a value-added proposition on top of the same basic concept of a dynamically controllable forwarding plane that underpins OpenFlow and the Open vSwitch.

It would be easy to implement the Nexus 1000v either in parallel with, or on top of, the Open vSwitch. Indeed, the value of OpenFlow has been recognized by one Cisco research group, and HP, Dell and NEC are active participants in the development and use of OpenFlow. Startups such as Netronome and Solarflare are leading the way toward extensive hardware support for the Open vSwitch, permitting native multi-10Gb/s switching on server hardware that also hosts virtualized enterprise workloads.

Open vSwitch can be used to replace the VMware vDS, which is a proprietary, rather prosaic implementation of a modestly richer networking stack for vSphere / vCloud. Unfortunately, vDS does not cleanly separate the forwarding and control planes, which limits the ability of the ISV ecosystem to innovate on VMware infrastructure. It is tied to the notion of VLANs as its network isolation structure and provides little in the way of differentiated per-application flow treatment. It also has no mapping onto SR-IOV based hardware functions, and therefore offers no clear value in a world where increasingly sophisticated second-generation SR-IOV NICs, with richly programmable forwarding hardware, are becoming available.
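As a contrast to VLAN-bound isolation, here is a hypothetical sketch of the overlay-style approach Open vSwitch makes possible: a GRE tunnel between the bridges on two hypervisors, over which tenant separation can be enforced with flow rules or tunnel keys rather than VLAN tags. The bridge name and peer address are placeholders.

```python
# Illustration of isolation without VLANs: one leg of a GRE overlay between
# the Open vSwitch bridges on two hypervisors. Assumes ovs-vsctl is installed.
import subprocess

def ovs(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

BRIDGE, PEER_HYPERVISOR = "xenbr1", "192.0.2.20"

ovs("--may-exist", "add-br", BRIDGE)
# A tunnel port whose far end is the peer host's bridge.
ovs("add-port", BRIDGE, "gre0", "--",
    "set", "interface", "gre0", "type=gre",
    f"options:remote_ip={PEER_HYPERVISOR}")
```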

The Open vSwitch is a reminder of the incredible power of open source: it catalyzes contributions from numerous aligned vendors, commoditizes legacy architectures, accelerates the pace of development, and allows a robust ecosystem of value-added providers to form around a common core feature set. We can look forward to a rich set of value-added networking products from many vendors, built around the (commoditized) forwarding function found in all switches and NICs today.