In my inaugural blog post (which you should check out before reading this follow-up article), I asked this same question about 18 months ago. I’ve always been a big fan of virtualizing PVS and debunking the myth that PVS had to be physical, but unfortunately, the answer to that question a year and a half ago was “maybe” or “only in certain situations”. In this article, I’m going to explain why the answer is now an emphatic “YES!” (or “almost always”).
Living in a 1 Gb World without LACP Support
Things were different a year and a half ago – most customers were living in a 1 Gb networking “world” and just starting to make the move to 10 Gb networking in their data centers. And even if customers were buying new servers with 10 Gb NICs, many distribution or core switches were only capable of pushing 1 or 2 Gb of throughput. To make matters worse, XenServer 5.x and 6.0 did not support LACP (802.3ad), so even if we could push 2 Gb of traffic through a distribution switch or uplink, we could only push 1 Gb of traffic out of a XS network team/bond: when you teamed a pair of 1 Gb NICs in XS without LACP, you only got 1 Gb of effective throughput, not 2 Gb. So we really only recommended virtualizing PVS in certain situations – if you were doing a small XA or XD deployment, if you had true 10 Gb networking at your disposal, or if you were virtualizing PVS on vSphere (which has had static LACP support for a while). But that was then and this is now…
Living in a 10 Gb+ World with LACP Support
A little over a week ago, we released XenServer 6.1.0. And while the two “big” features being talked about the most seem to be Storage XenMotion and Live VM Migration, I was thrilled to see some of the networking improvements we made under the hood – specifically, the ability to bond or team 4 NICs as opposed to 2, and formal, official support for LACP! Of course, now that we have these capabilities, I’d estimate that a good half of our enterprise customers are already running 10 Gb in their data centers (so it’s less important for them), but this is huge for the other half of our customers who are still living in a 1 Gb world in some capacity. Assuming your distribution or core switches can actually push 2 or 4 Gb of traffic, we can now bond four 1 Gb NICs together and get 4 Gb of effective throughput with LACP.

And 4 Gb of throughput will go a long way when we are talking about network I/O-bound workloads, such as PVS. We used to estimate somewhere in the range of 500 target devices per 1 Gb of throughput when scaling PVS, so now we’re looking at somewhere in the neighborhood of 2,000 target devices with a single, virtualized PVS VM on XS. That means supporting our largest XA customers easily with a pair of virtualized PVS VMs – and it also means supporting much larger XD deployments. This is big news.
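For those who want to try this, a four-NIC LACP bond can be created from the XenServer 6.1 CLI. The sketch below is a minimal example only – the UUIDs in angle brackets are placeholders you would look up yourself, and you should verify the exact syntax against the XenServer 6.1 documentation for your environment:

```shell
# List the PIF UUIDs for the host's physical NICs (e.g. eth0-eth3)
xe pif-list host-uuid=<host-uuid> params=uuid,device

# Create a network for the new bond to attach to
xe network-create name-label="PVS Bond Network"

# Create an LACP (802.3ad) bond across four 1 Gb PIFs --
# mode=lacp is the new capability in XenServer 6.1.0
xe bond-create network-uuid=<network-uuid> \
    pif-uuids=<pif1-uuid>,<pif2-uuid>,<pif3-uuid>,<pif4-uuid> \
    mode=lacp
```

Keep in mind that LACP is negotiated on both ends – the four physical switch ports the NICs plug into must also be configured as an LACP port channel, or the bond won’t come up.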
- 18 months ago, the answer to this question was “maybe” and we really only recommended virtualizing PVS on XS in smaller deployments
- Today, after the recent release of XS 6.1.0, which offers true active-active bonding for up to 4 NICs, the answer to this question is “almost always!”
Senior Architect, Citrix Consulting