There's a lot of discussion going on right now across a spectrum of virtualization technologies. Virtual machines are the latest craze, driving increased interest in server consolidation, as well as sparking interest in virtualizing desktop operating systems in the data center. VMs are finding their way into all kinds of interesting solutions these days.
I recently went looking for a definition of desktop virtualization (http://en.wikipedia.org/wiki/Desktop_Virtualization) and found that there actually wasn't already a Wikipedia.org definition – so I created one (please feel free to review/improve it – the Wiki way).
Each of these desktop delivery models has its own unique and specific advantages (and limitations). The most common case is a user simply needing direct access to a single PC desktop (at the office or home) from a different location over a network (often via the Internet). This works great for 1-1 or 1-Few scenarios. When many user desktops need to be hosted and managed centrally (hundreds), or a premises-based solution is required, that's where the Shared, VM and Physical Desktop models are increasingly coming into play – a trend we can expect to see accelerate over the coming years.
Whether my definition of Desktop Virtualization withstands the test of time on Wikipedia or not, it's clear that these technologies have been around for quite some time in various forms and are here to stay. They're also going through yet another transformation – one that's being fueled by customer demand and technology availability.
Desktop Virtualization is a growing marketplace, driven by many different factors, including people's needs to work remotely for various reasons (e.g., teleworking, mobile sales, outsourcing), and companies' needs to maintain control over their computing assets and the intellectual property running on business desktops.
Running user desktops (and applications) centrally can sometimes provide significant value and benefits over the traditional local PC desktop model. Hosting desktops securely within the data center walls lowers management costs through centralization (faster updates, less variability, standard images, etc.), along with the ability to effectively amortize PC compute power across many users.
There's another side benefit that may not be as obvious – workforce continuity and business continuation. Displaced workers need access to their business desktops to work from home or another location after a disaster, and hosting desktops provides some interesting options, assuming most end users have a personal PC they can use from home or abroad in such situations. In addition to VM technology, client PC blades are also helping pave the way for desktop centralization by giving each remote desktop user access to their own dedicated client PC blade – this is great for power users who are accustomed to having a dedicated machine environment.
There's a good discussion on the future of virtualization posted on Brian Madden's site – well worth the review.
From my perspective, the future looks bright for a whole range of desktop delivery approaches, including desktop virtualization. But what's the value in centralizing desktops in the data center and providing remote access?
Well – plenty. First, there's the generic value gained by virtualizing something. When we separate the physical location of the resource (in this case, the desktop) from the user, we add a layer. This new layer offers both intrinsic value and an opportunity to add new kinds of value. Let me explain.
By separating the user from the resource (desktop), we enable user mobility and remote access; i.e., the user is no longer tethered to the device containing the resource. This enables the desktop to be accessed over a network connection, providing users with the flexibility to work nearly anywhere, with access to their desktop.
It's the closest thing we can get to actually plugging into our desktop from anywhere. A very cool and useful concept – one that has been around in 1-1 PC remote access for many years, through products like PCAnywhere, Windows RDP, Citrix Presentation Server (formerly known as MetaFrame), VNC, and many others. Of course, today this kind of remote desktop access is also available as a hosted service, like those that GoToMyPC and WebEx offer.
The other intrinsic value of virtualization stems from the fact that the desktop can be physically secured and managed within a protected environment. The only way to gain access to it is through a controlled, secure connection – one that can be monitored, audited and carefully managed.
Another intrinsic value of desktop virtualization is the ability to effectively share a desktop across multiple users. Microsoft built multiple-user access into workstation operating systems a long time ago, yet there are only a limited number of use cases where multi-use of a shared workstation actually takes place (it does occur in labs, at clinical workstations, call centers, etc. today).
Another piece of intrinsic value has to do with the PC equipment being absent from a user's workspace. In the case of a stock trader, there's often insufficient space and/or cooling to collocate multiple desktop PCs where the traders all sit – flying their starships on or near the trading floor. Moving the PCs off of the trading floor also makes it possible to update and maintain everything more conveniently (not to mention it being more isolated and secure).
Yet another intrinsic value is the ability to keep sensitive documents, drawings, and other data files within the perimeter of the data center – physically securing these important company assets and intellectual property. By providing selected, authorized access to a centralized desktop, it's now possible to give contractors, suppliers and others within a company's value chain selected access (even admin rights for remote administration that's offshored), without giving up the intellectual property or the ability to carefully monitor the situation. This is huge for anyone wanting to outsource certain tasks, yet maintain control of their IP and shop.
Beyond intrinsic value, there's an opportunity to add new value within the virtualization layer itself. For example, we can provide an added layer of security, with more granular control over which resources can be accessed. It's also possible to add more intelligence about whether the user is allocated a shared desktop resource (running on Citrix and/or TS), a VM-based desktop image for knowledge workers and developers, or a client blade image for power users.
Of course, the weak link in running things like desktops remotely is typically the network. Networks are subject to tremendous variability in terms of latency, bandwidth and quality of service. To meet end user expectations, these factors must be managed well (in the usual ways): QoS prioritization and overlay networks so the interactive traffic gets through in a timely manner, and efficient communications between the endpoints so the effects of limited bandwidth and latency are mitigated where possible – including compression, caching and screen optimizations (like queuing and tossing, for example).
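To make the compression and caching ideas concrete, here is a minimal sketch of how a remote-display pipeline might cut bandwidth: tiles of the screen that haven't changed are skipped (caching), and only changed tiles are compressed and sent. This is an illustrative toy, not the actual protocol of RDP, ICA or any product; the tile size and cache scheme are assumptions.

```python
import zlib

def send_frame(frame: bytes, tile_size: int, cache: dict) -> int:
    """Toy remote-display sender: split a frame into fixed-size tiles,
    skip tiles the receiver already has (caching), and zlib-compress
    the tiles that changed. Returns the number of bytes "sent"."""
    sent = 0
    for offset in range(0, len(frame), tile_size):
        tile = frame[offset:offset + tile_size]
        if cache.get(offset) == tile:       # unchanged tile: nothing to send
            continue
        cache[offset] = tile                # remember what the receiver has
        sent += len(zlib.compress(tile))    # changed tile: send it compressed
    return sent

cache: dict = {}
frame1 = b"\x00" * 4096                     # mostly uniform screen content
first = send_frame(frame1, 1024, cache)     # first frame: all tiles sent (compressed)
second = send_frame(frame1, 1024, cache)    # identical frame: fully served from cache
print(first, second)
```

Real protocols layer many more tricks on top (delta encoding, glyph and bitmap caches, prioritizing keystrokes and cursor motion over bulk paints), but the same two levers – don't resend what the client has, and shrink what you must send – do most of the work.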
So, desktop virtualization has been around in various models for many years. It's beginning to see a resurgence, driven by customer demand and business value, coupled with technology availability.
What's the future of desktop virtualization? Well, clearly the hosted-service 1-1 model is doing extremely well today, as evidenced by GoToMyPC and others. The shared model has been proven over many years and many thousands of implementations on Terminal Services and Citrix Presentation Server (and MetaFrame and WinFrame before it). The emerging VM-based model holds a lot of promise, especially as VM efficiency and CPU power improve. At the same time, costs for the physical model (e.g., client blades) continue to drop, making this option viable for an increasing number of situations.
All of this means that the ability for a customer to choose one or more of these models to address various business needs will continue to flourish and grow. I personally believe Intel and AMD will continue to improve their chipsets by eradicating the biggest barriers to VM efficiency over the next 18 to 24 months (maybe even sooner). At the same time, improvements in I/O throughput efficiency in a VM environment will also arrive.
Competition in the VM arena will continue to increase. The state of the art in virtual machines and hypervisors is advancing very rapidly, with both XenSource and Microsoft racing to catch up with VMware as fast as they can. The increased competition will continue to push the value higher and the costs lower in this area, which is a good thing for the market and customers.
When we consider the 4-core CPUs coming next, it's certainly feasible to imagine an 8, 16 or even 32 core CPU someday – perhaps within the next 5 to 8 years. Why does CPU density matter? Because once the I/O and VM overhead limitations disappear, the main ones remaining will be CPU power and memory – and with 64-bit processors, I don't see memory as the bottleneck. More CPU horsepower will be needed to increase user density – the number of users that can be hosted per server on a VM implementation – which correlates directly to TCO. In the meantime, client blades offer a compelling alternative.
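The density-to-TCO link is simple division, which a back-of-envelope sketch makes plain. All figures below are hypothetical assumptions for illustration, not vendor data: the point is only that doubling user density halves the per-user server cost.

```python
# Back-of-envelope per-user cost on a VM desktop host.
# All numbers are illustrative assumptions, not real pricing.
server_cost = 8000.0            # assumed annualized cost of one VM host server
users_per_server = 40           # assumed user density achievable today

cost_per_user = server_cost / users_per_server

# If more cores and lower VM overhead double the density on the same box,
# the per-user cost is cut in half.
improved_density = users_per_server * 2
improved_cost_per_user = server_cost / improved_density

print(cost_per_user, improved_cost_per_user)   # 200.0 100.0
```

This is why chipset and hypervisor efficiency gains matter so directly to adoption: every extra user squeezed onto the same server drops the per-seat cost without touching the hardware budget.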
The combination of these technology improvements will enable VM-based model adoption to increase, and could allow it to cross the chasm as a mainstream method for managing corporate desktops by 2009 or 2010. Until the TCO improves, early adopters will continue to push technology vendors to move faster to address the shortcomings holding adoption back – like most technology adoption cycles, where business needs run ahead of the available technology.
It should be very interesting to watch as this space develops and grows over the next few years.