Last week’s blog focused on some of the networking design decisions when deploying XenDesktop 7.1 on Hyper-V 2012 R2. This week I want to go over some of the storage design decisions to consider, and some new storage features in Hyper-V 2012 R2.

There are five storage solutions to consider with Hyper-V 2012/2012 R2:

  • Local storage
  • Direct Attached Storage (DAS)
  • Storage Area Network (SAN)
  • SMB 3.0 File Shares
  • Windows Storage Spaces

Please note that I am referring to storage in general. I’m not going into specific storage strategies like Scale-Out File Server (SOFS), SMB over RDMA, or Cluster-in-a-Box (CiB).

Local storage is the simplest solution to implement but provides no redundancy in case of server failure. This type of storage is best suited for Hyper-V servers hosting random (pooled) desktops or hosted shared desktops. Failover Clustering is not supported, but it isn’t necessary since user data does not persist with these types of desktops. Live migration is still supported in the form of a “shared nothing” live migration, which copies the virtual machine and its storage across the network. This, however, does not protect the virtual machines in case of server failure.

DAS functions very similarly to local storage, with the difference that it can be shared among multiple computers if the device provides multiple interfaces for concurrent connections. Since it becomes “shared” storage in that configuration, Failover Clustering is supported, as is Live Migration. DAS devices are generally much cheaper than a SAN solution, but they don’t scale as well. DAS devices are best suited for small to medium XenDesktop deployments.

SAN is a more complex storage solution to set up, but it offers redundancy so that a host failure does not affect the virtual machines. SAN solutions are ideal for medium to large XenDesktop deployments. SANs are best suited for Hyper-V servers hosting static (dedicated) desktops, virtual servers supporting the XenDesktop infrastructure, and when using Cluster Shared Volumes. Failover Clustering and live migration are both supported with SANs.

SMB 3.0 file shares are a new feature in Windows Server 2012 / 2012 R2. This solution is very similar to a Network Attached Storage (NAS) solution but requires a Windows Server 2012/2012 R2 file server or a non-Microsoft file server that supports the SMB 3.0 protocol. This solution presents a file share as storage to Hyper-V 2012 R2. The benefit of SMB 3.0 file shares as storage is the ability to perform and scale as well as a SAN in many cases, at a lower cost. Failover Clustering and live migration are supported. This storage is also best suited for Hyper-V servers hosting static (dedicated) desktops and virtual servers supporting the XenDesktop infrastructure.

Windows Storage Spaces is also a new feature available in Windows Server 2012 / 2012 R2. It is best described as direct attached or local storage that behaves like a SAN. It works by grouping industry-standard disks or JBODs (just-a-bunch-of-disks) into storage pools. Unlike local storage, it supports Failover Clustering and scales better. Storage Spaces is a low-cost solution best suited for small to medium XenDesktop deployments.

Other storage considerations:

  • Which RAID level to use? The RAID level of a storage subsystem can impact the performance of the XenDesktop solution. RAID levels 1 and 10 offer the best performance for read/write operations but come at the price of 50% maximum disk capacity utilization. RAID 5 makes the best use of disk capacity, but has a high write penalty. For XenDesktop, RAID 5 or 6 can perform well for all aspects of the environment except for the Provisioning Services write cache. The Provisioning Services write cache is highly write intensive, so for best performance use RAID 1 or 10 on the disks storing the write cache.

Note: Storage Spaces supports just three resiliency types: Mirror, which is the equivalent of RAID 1; Parity, which is the equivalent of RAID 5; and Simple (no resiliency), which is the equivalent of RAID 0.
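To make the capacity/performance trade-off concrete, the usable capacity and backend write cost of the common RAID levels can be sketched with a simplified model (real arrays and controllers vary, so treat these numbers as illustrative only):

```python
# Simplified RAID model: usable capacity fraction and write penalty
# (backend disk I/Os generated per front-end write). Illustrative only.
RAID_LEVELS = {
    # level: (usable capacity fraction, write penalty)
    "RAID 0":  (1.0, 1),   # striping, no resiliency (Storage Spaces "Simple")
    "RAID 1":  (0.5, 2),   # mirroring (Storage Spaces "Mirror")
    "RAID 10": (0.5, 2),
    "RAID 5":  (None, 4),  # parity: capacity depends on disk count
    "RAID 6":  (None, 6),  # double parity
}

def usable_tb(level, disks, disk_tb):
    """Usable capacity in TB for a given RAID level and disk group."""
    frac, _ = RAID_LEVELS[level]
    if level == "RAID 5":
        return (disks - 1) * disk_tb  # one disk's worth of parity
    if level == "RAID 6":
        return (disks - 2) * disk_tb  # two disks' worth of parity
    return disks * disk_tb * frac

# Eight 1 TB disks: RAID 10 yields 4 TB usable, RAID 5 yields 7 TB,
# but each RAID 5 write costs 4 backend I/Os versus 2 for RAID 10.
print(usable_tb("RAID 10", 8, 1.0))  # 4.0
print(usable_tb("RAID 5", 8, 1.0))   # 7.0
```

This is why RAID 5/6 is attractive for the capacity-heavy, read-mostly parts of the environment, while the write-intensive Provisioning Services write cache belongs on RAID 1/10.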

  • How can I optimize IOPS? Available IOPS have a direct impact on disk performance and are often the cause of poor application performance in XenDesktop. The following factors affect IOPS and should be taken into consideration when designing the XenDesktop solution:
    • Minimize the size of the user’s profile. Consider roaming profiles with folder redirection, or a solution like Citrix Profile Management which streams the user’s profile as needed.
    • Make sure the storage solution used has been optimized for reads and writes. Many storage solutions have caching systems to speed up performance.
    • Make sure antivirus exclusions have been properly configured for XenDesktop.
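The interaction between workload write ratio and RAID write penalty is where most VDI sizing mistakes happen. A back-of-the-envelope backend IOPS estimate can be sketched as follows (the per-VM IOPS and read/write split are hypothetical placeholders; validate against your own monitoring data):

```python
def backend_iops(vms, iops_per_vm, read_pct, write_penalty):
    """Estimate the backend (physical disk) IOPS required.

    Reads cost one backend I/O each; each write is multiplied by the
    RAID write penalty (e.g. 2 for RAID 1/10, 4 for RAID 5).
    """
    frontend = vms * iops_per_vm
    reads = frontend * read_pct
    writes = frontend * (1 - read_pct)
    return reads + writes * write_penalty

# 100 desktops at 10 IOPS each with a 20% read / 80% write mix
# (a common steady-state VDI assumption -- measure your own workload):
print(backend_iops(100, 10, 0.20, 2))  # RAID 1/10 -> 1800.0 backend IOPS
print(backend_iops(100, 10, 0.20, 4))  # RAID 5   -> 3400.0 backend IOPS
```

The same 1,000 front-end IOPS nearly doubles in backend cost when moving from RAID 10 to RAID 5, which is the arithmetic behind the write-cache placement advice above.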

For information on optimizing Windows 8.x please see my colleague Amit Ben-Chanoch’s blog on the Windows 8/8.1 Optimization Guide.

  • Should I use thin provisioning? Thin provisioning allows more storage to be presented to the virtual machines than what is actually available in the storage repository. The risk to XenDesktop arises when the storage is “overcommitted”. When that occurs, the VMs may not function until more space is made available. There may also be delays due to space reclamation after large files are deleted. However, the benefits that thin provisioning provides in terms of storage costs outweigh the risks, so it can be safely used with XenDesktop.
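A trivial sketch of the overcommit ratio worth monitoring when thin provisioning is in use (the volume and disk sizes here are made-up examples):

```python
def overcommit_ratio(provisioned_gb, physical_gb):
    """Ratio of storage presented to VMs vs. actually available."""
    return provisioned_gb / physical_gb

# 200 desktops, each presented a 40 GB thin disk, on a 4 TB volume:
ratio = overcommit_ratio(200 * 40, 4096)
print(round(ratio, 2))  # ~1.95x overcommitted
if ratio > 1:
    print("Overcommitted: monitor free space and configure alerts")
```

Any ratio above 1.0 means actual consumption can outrun physical capacity, so free-space alerting on the underlying volume is the operational safeguard.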
  • Should I enable de-duplication? De-duplication can reduce storage requirements so it is recommended on storage solutions that support it. However, the de-duplication process can slow XenDesktop performance. If the storage solution supports it, use a post-process de-duplication method where the de-duplication process can be scheduled to run after-hours, or during periods of low activity.
  • Should I use VHD or VHDX disks for my VMs? VHDX is a new disk format available to VMs on Hyper-V 2012/2012 R2. The new disk format resolves some of the performance and alignment issues that were present with the VHD format and therefore is recommended for XenDesktop VMs on Hyper-V 2012/2012 R2. XenDesktop 7 will automatically use the VHDX format when creating MCS differencing disks, PVS write cache disks, and Personal vDisks.
  • What is the CSV cache and should I enable it? This is a new feature in Windows Server 2012 Failover Clustering that provides caching on Cluster Shared Volumes. The CSV cache uses RAM to cache read-only unbuffered I/O, which improves Hyper-V performance since Hyper-V conducts unbuffered I/O when accessing a VHD or VHDX file. This greatly improves the performance of Machine Creation Services (MCS) on Hyper-V 2012 R2, since the base VHDX file can be read and cached in memory. By default the CSV cache is disabled. I recommend enabling it using the steps outlined in the Microsoft Clustering and High-Availability blog.
  • What size should I make the CSV and how many VMs per CSV should I plan for? Unfortunately there’s no magic formula we can use to help answer this due to the number of factors involved. When sizing the cluster shared volumes the following should be taken into consideration:
    • A larger CSV has a greater impact if it fails than a smaller one. For example, to reduce the impact of a CSV failure on XenDesktop, it may be better to use two smaller CSVs with 50 VMs each than one larger CSV with 100 VMs.
    • When sizing, don’t just count the number of VMs and multiply by the size of the write cache or differencing disk. Also factor in additional space for snapshots, saved-state data, etc. A buffer of 15% – 20% extra storage beyond the calculated requirement is a good rule of thumb.
    • When determining how many VMs to place on a CSV, take into consideration:
      • Operating system on the VMs
      • VM configuration (vRAM, vCPU, disk)
      • Expected workloads
      • Storage hardware (iSCSI, Fibre Channel or SMB, RAID level, IOPS, throughput etc.)
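The sizing guidance above can be reduced to a simple formula: per-VM disk plus per-VM overhead, times the VM count, plus the buffer. A minimal sketch, with hypothetical per-VM figures you would replace with your own measurements:

```python
import math

def csv_size_gb(vms_per_csv, per_vm_disk_gb, overhead_gb_per_vm=0,
                buffer_pct=0.20):
    """Rough CSV capacity estimate: per-VM disk (write cache or
    differencing disk) plus snapshot/saved-state overhead per VM,
    plus a 15-20% buffer on top of the calculated requirement."""
    base = vms_per_csv * (per_vm_disk_gb + overhead_gb_per_vm)
    return math.ceil(base * (1 + buffer_pct))

# Two CSVs of 50 VMs each, 15 GB write cache + 5 GB overhead per VM:
print(csv_size_gb(50, 15, 5))  # 1200 GB per CSV with a 20% buffer
```

The output is only a starting point; the workload, storage hardware, and failure-domain considerations listed above still decide the final layout.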

In next week’s blog I will cover System Center Virtual Machine Manager 2012 R2, and its importance to the XenDesktop design.

Ed Duncan – Senior Consultant
Worldwide Consulting
Desktop & Apps Team
Virtual Desktop Handbook
Project Accelerator