The following is a blog I posted on my “other” site – Virtualization Pulse, hosted by Tech Target. Most readers on these pages are already very knowledgeable, so please forgive the simplistic view. In the near future, I will publish additional blogs on virtualization, focusing specifically on the healthcare IT space. Consider this one a relatively simple level-set for the audience. Enjoy.
———————————

Given the incentives associated with the adoption of Electronic Medical Records, medical CIOs and their teams are inundated with vendor messages these days. Phrases like “Meaningful Use”, “HITECH”, and “HIPAA” are at the forefront of everyone’s mind, but you may also hear about virtualization. Since there is still some confusion on the topic, I’d like to clear a couple of things up:

Server Virtualization. “Virtualization” is a term that has traditionally been used in the context of server virtualization. The technology involved is often referred to as a “hypervisor”, which basically allows a modern server with plenty of CPU and memory resources to share those resources among multiple “workloads” or “virtual servers”. So, instead of hosting a single application, one physical server with 16 CPU cores and 128 GB of RAM can often house 40-60+ individual workloads that act on your network just as if they were much smaller individual servers. The benefits are obvious. Today’s servers are relatively cheap to acquire, and most server workloads don’t require nearly as many computing resources to do their job. IT departments can lower costs by running fewer physical servers, consuming less rack space, and reducing power and cooling expenses. Advanced virtualization solutions also allow virtual servers to automatically move to a separate physical host in case of a hardware failure. This failover is often seamless and therefore provides resiliency, but it typically requires a separate, redundant storage area network to work on the fly. Less critical workloads can be moved in a semi-manual fashion, where the administrator simply restarts them on another physical host.
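For the back-of-the-envelope types, here is a minimal sketch of the consolidation math. Every number in it (the vCPU overcommit ratio, the per-workload sizing) is an assumption I picked for illustration, not a vendor benchmark or sizing guidance:

```python
# Back-of-the-envelope consolidation math. All inputs below are
# illustrative assumptions, not sizing guidance.
PHYSICAL_CORES = 16     # cores per host, as in the example above
PHYSICAL_RAM_GB = 128   # RAM per host, as in the example above
VCPU_PER_VM = 2         # assumed average vCPUs per workload
RAM_PER_VM_GB = 4       # assumed average RAM per workload
CPU_OVERCOMMIT = 6      # assumed vCPU-to-physical-core ratio

def vms_per_host(cores=PHYSICAL_CORES, ram_gb=PHYSICAL_RAM_GB):
    """Estimate how many workloads one host can carry; the tighter
    of the CPU and memory limits wins."""
    by_cpu = (cores * CPU_OVERCOMMIT) // VCPU_PER_VM   # 48 with these inputs
    by_ram = ram_gb // RAM_PER_VM_GB                   # 32 with these inputs
    return min(by_cpu, by_ram)

capacity = vms_per_host()
print(f"Estimated workloads per host: {capacity}")               # 32
print(f"Hosts needed for 200 workloads: {-(-200 // capacity)}")  # 7 (ceiling division)
```

With more generous overcommit (or smaller workloads), the same arithmetic lands in the 40-60+ range mentioned above; in practice, memory is usually the limit you hit first.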
Vendors in this space include VMware (vSphere), Citrix (XenServer), Microsoft (Hyper-V) and a number of other players.

Application Virtualization. This is another form of virtualization, which has virtually nothing to do with server virtualization (pun intended). In this model, an application (think of your office productivity suite or your electronic medical records client) is installed on a central server and executes there. The user connects from an endpoint (PC, laptop, thin client, etc.) via a remoting protocol and essentially controls the application remotely. This can be done at a simple level with Microsoft Terminal Services and the RDP protocol, or at the higher end via specialized solutions such as Citrix XenApp (formerly known as Presentation Server or MetaFrame). The benefits are obvious. Applications can be centrally managed, and IT support personnel no longer have to touch an end user’s system to install or patch an application; all updates are performed on a few centrally located servers. This approach also keeps the application physically close to its backend data (on a low-latency, high-bandwidth network), which leads to faster execution and much improved security, since the data never leaves the datacenter. The only information exchanged between the end user’s device and the central server consists of screen updates plus mouse and keyboard events, though the protocols can also convey audio, printing, USB device support, and so on. The performance is astonishing in many cases; even the most demanding customers in the engineering space run their complex design applications via Citrix XenApp.
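To make the “only events up, only pixels down” idea concrete, here is a toy model of the remoting loop. The class and message names are invented purely for illustration; real protocols like RDP or Citrix ICA are vastly more sophisticated (compression, caching, audio, USB redirection, and more):

```python
# Toy model of application remoting: the app and its data stay in the
# datacenter; only input events and screen updates cross the network.

class RemoteApp:
    """Runs in the datacenter, right next to the backend data."""
    def __init__(self):
        self.document = ""  # stands in for sensitive backend data

    def handle_input(self, event):
        # The application executes server-side; only the event arrived
        # over the wire, never the underlying data.
        if event["type"] == "keypress":
            self.document += event["key"]
        # Reply with a screen update (here: the rendered text).
        return {"type": "screen_update", "pixels": f"[renders: {self.document}]"}

class ThinClient:
    """Runs on the endpoint; knows nothing but events and pixels."""
    def __init__(self, app):
        self.app = app  # stands in for the network connection

    def type_key(self, key):
        update = self.app.handle_input({"type": "keypress", "key": key})
        print("client draws:", update["pixels"])

client = ThinClient(RemoteApp())
for ch in "EMR":
    client.type_key(ch)
```

Note that the `document` never leaves the `RemoteApp` object; the client only ever sees rendered output, which is exactly why this model is attractive for protected health information.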

Desktop Virtualization. This is the latest and greatest. Instead of executing just a set of applications in the datacenter, the industry is moving toward executing entire desktop operating systems in the datacenter and letting users connect to those desktops. One could write a whole book about desktop virtualization, so I will keep it brief. Some vendors tout a “VDI” or “Virtual Desktop Infrastructure” model, where each user basically has their own assigned virtual desktop in the datacenter. This model moves the headache of desktop maintenance to a central location, but it still runs into some of the same challenges as traditional desktop management (such as the need to patch many individual desktop instances and to troubleshoot or fix corrupted or infected desktops).
More advanced models move toward a shared desktop image, where each user connects to a brand new, pristine desktop operating system that folds the applications and user settings in as the user connects. This has the advantage of ensuring the highest performance (after all, a brand new desktop always performs best) and can dramatically cut down on the number of desktop images to maintain. Having just one or a handful of master images to patch and maintain for thousands of users provides great efficiency gains and cost savings, as the sketch below illustrates.
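Here is a minimal sketch of why the shared-image model patches so much more cheaply than 1:1 VDI. The class names are mine and exist only for illustration; real brokering and image-composition products are considerably more involved:

```python
# Contrast: persistent 1:1 VDI desktops vs. pooled desktops composed
# from a single master image at logon. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class MasterImage:
    name: str
    patch_level: int = 0

@dataclass
class PersistentDesktop:
    """1:1 VDI: each user owns a copy that must be patched individually."""
    owner: str
    patch_level: int = 0

@dataclass
class PooledDesktop:
    """Shared-image model: composed fresh from the master at logon."""
    master: MasterImage
    user: str

    @property
    def patch_level(self):
        return self.master.patch_level  # always inherits the master's state

master = MasterImage("win10-clinical")
persistent = [PersistentDesktop(u) for u in ("alice", "bob")]

master.patch_level = 1  # patch the single master image, once

pooled = [PooledDesktop(master, u) for u in ("alice", "bob")]
print([d.patch_level for d in persistent])  # [0, 0] -- each copy still needs patching
print([d.patch_level for d in pooled])      # [1, 1] -- all current automatically
```

One patched master, thousands of current desktops: that asymmetry is the core economic argument for the shared-image approach.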

So, let’s recap. Server, application, and desktop virtualization are three distinct disciplines in healthcare IT, and they are important to understand. Don’t fall for the siren’s song and assume that a vendor who is good at one discipline is automatically an expert at the others.
Check back on these pages in the near future for my rundown on virtualization techniques for your EMR implementation.

Florian Becker
Twitter: @florianbecker
Virtualization Pulse: Tech Target Blog
Ask the Architect – Everything Healthcare