I’d like to share an experience that I’ve had at multiple customer sites. Initially I thought it was just a weird thing that only happened to me, but when I discussed it at our sales kickoff event, I was deluged with “that happened at my account too” stories and was even part of a conversation with customers who had switched from VMware View to XD after they went live. So the goal of this blog is to share information that I believe people investigating VDI need to know as they go through their purchasing process.

A bit of background first… When View 5 came out, VMware launched a PR campaign to try to convince the world that PCoIP, the protocol they license from Teradici, had “caught up” to HDX. They said they had overcome the problems with WAN, did a little song and dance in front of the Citrix office, and then referenced a couple of videos (the second by Gunnar Berger) showing PCoIP seemingly outperforming HDX on an iPad as their end-game proof point.

My reaction at the time was to release a technical diatribe about why they hadn’t actually caught up. Only now do I realize that I failed to address the dog and pony show itself; that is, I never actually showed how HDX is still better than PCoIP, and so I failed to stop the continuous march of these videos into accounts by VMware View reps claiming parity with (or even superiority to) HDX. As Gunnar will tell you, different video scenarios can be rigged to manufacture specific outcomes. Here’s what I mean:

My point is that we need to look at end-user experience and network performance holistically, not just at face value. Shawn Bass and Benny Tritsch have been doing their “VDI Remoting Protocols Turned Inside Out” presentations for a long time now, and for a deep technical analysis of actual protocol performance I think they are the gold standard. So I’m not going to use this blog as another fluff piece about who shows what better in a 1:1 scenario. Instead I’d like to challenge you to think bigger:

Let’s imagine that the user experience of 1 user connecting to 1 desktop is at parity, as VMware and Teradici would like you to believe.

In that scenario your next consideration would be bandwidth constraints. If the WAN is indeed the last hurdle to be cleared before PCoIP can claim parity with HDX, then we must properly define a WAN.

Most VDI test beds define static WAN conditions as a formula composed of bandwidth (B), latency (L), and occasionally packet loss (P) – some set of values like B = 1 Mb/s, L = 100 ms, P = 0.05% to emulate specific WAN conditions. However, real WAN conditions are dynamic, so it stands to reason that adaptability should be an important metric for determining protocol efficacy.
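If you want to reproduce that kind of static emulation in your own POC, here’s a minimal sketch of one way to do it, assuming a Linux box with tc/netem sitting in the test path between the clients and the VDI hosts. The interface name and the B/L/P values below are placeholders, not recommendations:

```python
# A minimal sketch of static WAN emulation with Linux tc/netem (run as root).
# Assumes a Linux machine bridging or routing the POC traffic; "eth0" and the
# default values are placeholders to be swapped for your own measurements.
import subprocess

IFACE = "eth0"  # interface carrying the VDI traffic under test

def apply_wan_profile(rate="1mbit", delay="100ms", loss="0.05%"):
    """Shape the link to a fixed bandwidth/latency/loss profile."""
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                    "netem", "rate", rate, "delay", delay, "loss", loss],
                   check=True)

def clear_wan_profile():
    """Remove the emulated WAN profile and restore the link."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    apply_wan_profile()  # B = 1 Mb/s, L = 100 ms, P = 0.05%
```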

With this in mind I asked one of our ninjas, Frank Anderson, to add an adaptability test to our standard technical marketing regimen. Nothing complicated, just a scenario that would showcase an aspect of HDX technology that is often overlooked in POCs with static bandwidth conditions. Here’s what he came up with (I’ve fast-forwarded to the relevant part, but I encourage you to watch the whole video):

What’s important to note above is twofold:

  • HDX user experience over dynamic bandwidth conditions is superior to PCoIP
  • HDX still takes up significantly less bandwidth per user on the network than PCoIP

These two points are important to keep in mind when designing a VDI POC because testing user experience at scale is something that I regularly see VMware try to avoid. Their demonstrations focus exclusively on either user experience (some HD video) or scale (how many VDI desktops can I fit on a WAN link) but never both.

Ensuring the existing network has enough capacity to handle the payload is one thing, but the impact of non-VDI bandwidth on VDI user experience is probably even more germane. Though it may seem obvious, I rarely see anyone saturate POC test networks with any traffic other than VDI in order to test the impact of non-VDI traffic such as web, mail, file transfer, backups, voice, etc. on VDI end-user experience (let alone the impact of VDI traffic on those workloads – i.e. how does VDI affect e-mail or VoIP?).
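If you don’t have real users or clients available to generate that background load, even a couple of scripted bulk flows will at least put contention on the wire. Here’s a rough sketch using iperf3; the server name, ports, and rates are placeholders, and this only approximates file-transfer/backup and voice traffic – real mail, web, and VoIP clients are a better long-term answer:

```python
# A rough sketch of generating non-VDI background load with iperf3.
# Assumes iperf3 servers are already listening on the default port and on 5202
# at the (hypothetical) host below; all rates and durations are placeholders.
import subprocess

TRAFFIC_SERVER = "iperf-server.example.local"  # hypothetical traffic sink

def start_background_load(duration=600):
    flows = [
        # 20 Mb/s of bulk TCP as a stand-in for backups / large file copies
        ["iperf3", "-c", TRAFFIC_SERVER, "-b", "20M", "-t", str(duration)],
        # 1 Mb/s of small UDP packets as a crude stand-in for voice traffic
        ["iperf3", "-c", TRAFFIC_SERVER, "-u", "-b", "1M", "-l", "200",
         "-p", "5202", "-t", str(duration)],
    ]
    return [subprocess.Popen(cmd) for cmd in flows]

if __name__ == "__main__":
    for proc in start_background_load():
        proc.wait()
```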

User rejection is the biggest reason VDI projects fail. Another proof point for this post came from Gabe Knuth’s analysis of OnLive’s DaaS service, another virtualization solution that demos well but would likely be rejected in an enterprise environment. It doesn’t matter if you can squeeze hundreds or thousands of desktops onto a server or a network if what your end users see is this:

Try getting your work done on this desktop

Therefore it stands to reason that using the wrong protocol could make all the difference between success and failure. Rather than get deeper into the technology and lose what readers I have left, my spec for a FUD-less VDI POC is as follows:

  1. Identify, record and play back several real user workloads
  2. Measure and reproduce dynamic network conditions (a rough scripting sketch follows this list)
  3. Saturate the test-bed network with non-VDI traffic such as HTTP, MAPI, CIFS, VoIP, etc. representative of your existing application workloads (unless you plan on having a dedicated network purely for VDI traffic)
  4. Test at scale: the POC should represent 10%–20% of your pilot (e.g. if your pilot is 1,000 desktops, try to stage the POC to test 200 desktops)
  5. Test end-user experience and scalability together (don’t be wowed by demos until you see them with hundreds of desktops on your back-end infrastructure and over the WAN)
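For item 2, one simple approach is to step the tc/netem setup from earlier through a schedule of rate/latency/loss values measured on your own WAN, so the protocol has to adapt mid-session instead of settling into a single static profile. The schedule below is purely illustrative; substitute values captured from your own links:

```python
# A minimal sketch of replaying dynamic WAN conditions by cycling tc/netem
# through a schedule (run as root on the same Linux box as the static example).
# The hold times and rate/delay/loss values are illustrative placeholders.
import subprocess
import time

IFACE = "eth0"  # interface carrying the VDI traffic under test

# (seconds to hold, rate, delay, loss)
SCHEDULE = [
    (120, "2mbit",   "80ms",  "0.01%"),  # off-peak
    (120, "1mbit",   "120ms", "0.05%"),  # typical business hours
    (60,  "512kbit", "200ms", "0.5%"),   # congested period
]

def replay_schedule():
    for hold, rate, delay, loss in SCHEDULE:
        subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                        "netem", "rate", rate, "delay", delay, "loss", loss],
                       check=True)
        time.sleep(hold)

if __name__ == "__main__":
    while True:  # loop the profile for the duration of the test run
        replay_schedule()
```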

If there’s one takeaway from this blog it’s this: your best chance at a successful desktop virtualization strategy is to define a POC that is representative of your pilot deployment. Don’t let yourself get demo’d out of success.