Edited post-hoc. A tad over the top. No offence intended Elias!
I’ve been enjoying a quiet start to the year – Twitter- and blog-wise, that is. I turned on TweetDeck for a bit, but to be honest I really can’t decipher @Beaker. Is it gibberish, or a secret code controlling an army of cloud-based “RT: @Beaker” bots? (#envy). (If you’re lost, #fail, and skip the next paragraph too.)
OK, I’ll bite. And if you’re looking for a quick response before moving on: this chap is woefully confused.
Let’s take a look. The first thing that strikes me about Elias’s article is that @herrod is highly unlikely to respond to his challenge. Why? Well, for starters, he uses DVI (Desktop Virtualization Infrastructure) instead of VMware’s own term, VDI. Moreover, Steve does have a pretty big job on his hands trying to complete Project Redwood.
The weather is far from awful in DVI land, whatever Elias seems to think – though he is right to point out that it has been bad in the past. His point is that with traditional enterprise storage architectures representing as much as 60% of TCO, hosted virtual desktops just don’t make financial sense. The problem? Back in the days when VDI meant something, one would create and store a complete Windows client OS VM per user – their hosted virtual desktop. One would likely use VMFS to “manage” storage, making it impossible for the arrays to understand the structure of the real storage workload (virtual disk images). With the storage infrastructure flying blind – unable to assist with placement, caching or read-ahead – performance was terrible, and the only way to solve the problem was to buy more storage and more expensive SAN networking.
So all the vendors “ran around hysterically,” as Elias puts it, and started to innovate. There has been a flood of new technology: SSD- and RAM-based caches, array-based thin clones and snapshots, and lots more besides. The storage ecosystem has done a fabulous job. We at Citrix have always viewed our role as one of exploiting as much functionality as possible in the storage infrastructure – we love innovative storage partners. For more than two years XenServer (via StorageLink) has been able, for example, to leverage in-array snapshots, thin provisioning and fast clones. But the demons haunting DV storage have their roots in Moore’s Law: a single modern server can generate more IOPS than any array can satisfy, and technology will continue to favor server-side IOPS on the road ahead.
So, ultimately, the solution lies in a proper decomposition of the DV storage problem into its constituent parts. Properly managed, the user’s desktop decomposes into the user’s environment, apps and golden OS image, and these can be dynamically composed (using various virtualization technologies) on the fly to build the user’s desktop. Now, on a server running lots of virtual desktops, why would the hypervisor ever pull the golden Windows image over the network more than once? It wouldn’t – the golden OS image should already be there, and indeed it ought to be shared across all VMs. Ditto for the apps.
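To make the point concrete, here is a minimal sketch (hypothetical names, not any real Citrix API) of host-side golden-image sharing: the first read of a block fetches it over the network, and every later read – from any VM on the host – is served from the shared local cache.

```python
class SharedImageCache:
    """Toy model: one shared, read-only cache of golden-image blocks per host."""

    def __init__(self, fetch_block):
        self.fetch_block = fetch_block   # pulls a block from shared storage
        self.cache = {}                  # block number -> bytes, shared by all VMs
        self.network_reads = 0

    def read(self, block_no):
        if block_no not in self.cache:   # only the first VM to touch a block pays
            self.cache[block_no] = self.fetch_block(block_no)
            self.network_reads += 1
        return self.cache[block_no]

# 100 VMs booting from the same golden image each read blocks 0-9:
cache = SharedImageCache(lambda n: b"block-%d" % n)
for vm in range(100):
    for block in range(10):
        cache.read(block)
print(cache.network_reads)  # 10 network reads instead of 1000
```

The boot storm collapses: shared storage sees each golden-image block once per host, not once per desktop.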
Moreover, when we examined the I/O profile of hosted desktop VMs we found that writes outnumber reads by as much as 8:1. The culprit? The Windows page file. As Chris Wolf’s analysis argues, the page file should never leave the server; instead, it should be cached locally on disk or (better) SSD. Finally, a major cause of write-I/O latency in the storage subsystem is the near-random movement of the disk heads when faced with I/O from a large number of desktops. So we eliminated that too, by caching writes locally and issuing large sequential writes to the storage infrastructure.
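The write-side idea can be sketched the same way (again with hypothetical names – a model of the technique, not the real implementation): random per-desktop writes are absorbed into a local buffer, overwrites to the same offset merge, and the buffer is flushed to shared storage as one sorted, sequential pass.

```python
class WriteCoalescer:
    """Toy model: absorb random writes locally, flush sequentially."""

    def __init__(self, backend_write, flush_threshold=4):
        self.backend_write = backend_write  # sends (offset, data) to shared storage
        self.buffer = {}                    # offset -> latest data (overwrites merge)
        self.flush_threshold = flush_threshold

    def write(self, offset, data):
        self.buffer[offset] = data          # absorbed locally; no array I/O yet
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # issue buffered writes in ascending offset order: one sequential sweep
        for offset in sorted(self.buffer):
            self.backend_write(offset, self.buffer[offset])
        self.buffer.clear()

flushed = []
wc = WriteCoalescer(lambda off, data: flushed.append(off))
for off in (40, 8, 8, 16, 24):   # five random writes, one repeated offset
    wc.write(off, b"x")
print(flushed)  # [8, 16, 24, 40]
```

Five scattered desktop writes become four in-order array writes – the disk heads sweep instead of thrashing.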
This is IntelliCache – a feature Elias thinks is cool but irrelevant. Well, he’s wrong. IntelliCache reduces HVD IOPS by as much as 98%! What ends up hitting shared storage is precisely what you wanted: the user’s differences from the golden-image state. He’s right in stating that you can’t use live relocation with today’s implementation of IntelliCache. Big deal – this is a desktop, remember! Moreover, he might want to note that we still manage two platform releases per year.
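“Differences from the golden image” is just copy-on-write, which a few lines can illustrate (a sketch of the concept, not IntelliCache itself): reads fall through to the read-only golden image, writes land in a per-user delta, and that delta is all shared storage ever needs to hold.

```python
class DeltaDisk:
    """Toy copy-on-write disk: a per-user delta over a shared golden image."""

    def __init__(self, golden):
        self.golden = golden   # read-only, shared by every desktop
        self.delta = {}        # this user's differences from the golden state

    def read(self, block_no):
        # the user's own writes win; everything else comes from the golden image
        return self.delta.get(block_no, self.golden.get(block_no))

    def write(self, block_no, data):
        self.delta[block_no] = data      # the golden image is never modified

golden = {0: b"boot", 1: b"system"}
user = DeltaDisk(golden)
user.write(1, b"patched")
print(user.read(0), user.read(1))  # b'boot' b'patched'
print(user.delta)                  # only the difference: {1: b'patched'}
```

However many desktops share the golden image, shared storage carries one copy of it plus each user’s (typically small) delta.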
Elias also thinks that IntelliCache is not useful for cloud storage. Dude, have you ever been inside a large cloud? Local storage is all they use. IntelliCache is perfect for “instant on” of any OpenStack-based cloud workload. He also says, “with all due respect, local disk is dead.” My response: Moore’s Law (and Google, Facebook, and every other massive infrastructure you use daily) says you are utterly, totally, irrationally and profoundly wrong.