Can XenDesktop be deployed without a SAN? It's a question customers in the small and medium business (SMB) arena ask frequently, and one I recently decided to investigate.
Dan Feller, one of my colleagues, has previously discussed this topic when he wrote about deciding between local and shared storage for virtualization. If you haven't read his blog, give it a quick read so that some of the things I discuss here will make more sense.
For those of my readers not familiar with Provisioning Services or XenDesktop, these technologies make it possible to stream a virtual disk image to a virtual machine. When that virtual disk is in read-only mode, the writes need to go somewhere, and we call that location the write-cache. Since the write-cache is primarily used for writing, SAN storage that supports high write IOPS (input/output operations per second) is recommended.
For small and medium businesses that want to reap the benefits of XenDesktop, but that don’t have significant capital to invest in a high-end SAN, the use of local storage to host the write-cache drive would remove a significant implementation barrier. In most situations, the IOPS supported by the local storage system is the primary constraint limiting the number of virtual machines that could run on a single host. For a small or medium business that does not require high density, local drive caching would be a viable alternative.
A local storage array of eight 15K disks will supply roughly 1200 to 1500 raw IOPS. Adjusting that for a RAID 10 configuration (which incurs a write penalty of 2, because each write consumes two back-end operations) and a VDI workload that is typically 90% writes and 10% reads, you end up with a functional throughput of between 660 and 825 IOPS. In his blog, Dan provides a convincing set of numbers for specific operations, ranging from 4 IOPS (normal working load) to 26 IOPS (boot-up).
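The adjustment above can be sketched as a quick calculation. This is a minimal sketch, assuming reads cost one back-end operation and writes cost two (the RAID 10 write penalty); the function name and ratios are mine, not from any Citrix tool:

```python
def functional_iops(raw_iops, read_ratio=0.10, write_ratio=0.90, write_penalty=2):
    """Share of raw array IOPS usable by the workload once the
    RAID write penalty is applied to the write portion."""
    return raw_iops * (read_ratio + write_ratio / write_penalty)

# Eight 15K disks supply roughly 1200-1500 raw IOPS.
for raw in (1200, 1500):
    print(raw, "raw ->", round(functional_iops(raw)), "functional IOPS")
# -> approximately 660 and 825 functional IOPS
```

The same function lets you test other RAID levels by swapping the write penalty (for example, 4 for RAID 5).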
Generally, the OS boot-up sequence for a desktop can be controlled by bringing the machines up during off-peak hours. The next most disk-intensive workload is generally considered to be the logon or logoff event, which requires around 15 IOPS. Since logons and logoffs occur far more frequently than boots and cannot all be shifted to off-peak hours, planning for that load is more realistic and cost-effective.
Using 15 IOPS as the target workload per desktop and the 8-disk RAID 10 array described above, the number of desktops supported would be between 44 and 55. Of course, theory is one thing and practice is another, so the next logical step was to build the environment and test the theory.
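The sizing estimate is just the functional IOPS range from the calculation above divided by the per-desktop target. A quick sketch, using the 660–825 functional IOPS figures and the 15 IOPS logon load as given:

```python
IOPS_PER_DESKTOP = 15  # planning target per desktop (logon/logoff load)

# 660-825 is the functional IOPS range estimated for the
# eight-disk 15K RAID 10 array.
for functional in (660, 825):
    desktops = functional // IOPS_PER_DESKTOP
    print(functional, "functional IOPS ->", desktops, "desktops")
# 660 // 15 = 44 desktops, 825 // 15 = 55 desktops
```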
My first stop was to borrow a server with enough drive bays. My old scalability team at the Citrix eLabs loaned me an HP DL380 G6, which supported a maximum of eight local drives. The DL380 had dual Intel Xeon E5520 quad-core processors, 72 GB of RAM, and eight 72 GB 15K drives. To maximize the IOPS available to the write-cache drives, the hypervisor would need to share spindles with the drives hosting the desktops. As long as the hypervisor's overhead on the drives was evenly distributed and fairly low, the impact should be acceptable.
Since I only had the server for a short time while it was not in use, I could not run a complete battery of tests against it. However, in the time I had, I was able to run two scenarios that provided sufficient data to test my premise. I chose Microsoft Windows Server 2008 R2 Hyper-V for the hypervisor and Windows 7 with XenDesktop 4 for the virtual desktop. For the workload simulator I used Login Consultants' LoginVSI medium workload.
Since Windows 7 recommends 1 GB of RAM per virtual desktop, my HP DL380 server with 72 GB of RAM was limited to a maximum of 68 desktops after reserving 4 GB for the hypervisor. I ran two tests, one with 64 virtual desktops and one with 68. On both test runs, all sessions started successfully. The chart below shows the optimal session counts reported by the LoginVSI tool.
As you can see from the results, in both test runs the LoginVSI optimal session count showed over 50 sessions performing within the 2-second response time parameter. Unfortunately, I did not have time to try out all the scenarios I would have liked to test, but the results are pretty encouraging as they matched my expectations.
The original premise was that eight 15K drives in a RAID 10 configuration would support 44-55 desktops. After running through the test scenario, that theory appears to be confirmed: the number of desktops successfully completing the LoginVSI medium workload ranged from 52 to 55. We can also conclude that the IOPS formula used (90% writes/10% reads) and the logon load estimate (15 IOPS) appear to be on the mark for midrange workloads.
Keep in mind that the test I performed was basic, more of a feasibility study than a formal benchmark. I used the test bed to determine whether local storage was a viable option for a small to medium-sized deployment. After running the tests, I am highly confident that Windows 7 XenDesktop virtual desktops hosted on Windows Server 2008 R2 Hyper-V with local storage would be an ideal solution for the SMB market.