This is part three in a series of articles on Machine Creation Services (MCS) Storage Optimization (MCSIO).  

The first post in the series—Introducing MCSIO Storage Optimization—gave an overview of the technology and architecture. The second, Reducing shared storage IO by over 90% with MCS Storage Optimization, highlighted how MCSIO can reduce shared storage load, using data from a series of tests. This post shares insights gained from running those tests and from a wider understanding of the technology that may be useful to consider for future deployments of XenApp and XenDesktop.

The tests conducted have shown that, with MCSIO, you can effectively eliminate IO traffic from your shared storage. They also highlighted that the IO characteristics of the MCSIO worker and its redirected IO traffic depend on the MCSIO configuration and the size of the temporary memory cache.

MCSIO temporary memory cache + temporary disk cache mode configuration

The IO profile of the MCSIO worker in this configuration is dependent on the size of the temporary memory cache. Data that can be held in the temporary memory cache does not consume space in the temporary disk cache, thereby reducing IO operations on temporary storage. The larger the temporary memory cache, the smaller the expected temporary disk cache, and vice versa.

A useful observation is highlighted in the second post: in the 2012R2 tests with a 256MB temporary memory cache, we saw an approximate 7% reduction in redirected SUM IOPS compared to standard MCS on shared storage. Increasing the temporary memory cache to 1GB gave a 77% reduction, and at 4GB we saw a 100% reduction in redirected SUM IOPS. The temporary memory cache size therefore influences the size, capacity, and performance requirements of the temporary disk cache storage.

For configuration guidance on this MCSIO deployment type, the minimum recommendations are:

  • For VDI workers, a 256MB temporary RAM cache.
  • For RDS/XenApp workers, a 2GB temporary RAM cache.

If there is spare memory capacity in your environment, you will benefit from a reduction in storage IO by using this spare capacity to increase the size of the temporary memory cache.

As a general rule, it is recommended that you size your temporary disk cache to at least the VM's free disk space plus its memory page file:

Temporary cache disk = VM available disk space + VM page file.

For example, if you have a Windows Server 2012R2 machine that has used 40GB of its 60GB disk, leaving 20GB of free space, and a page file of 10GB, you would require a 30GB temporary cache disk.
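To make the sizing rule concrete, here is a minimal sketch in Python that encodes the minimum memory cache recommendations above and the temporary disk cache formula. The function names and structure are purely illustrative; they are not part of any Citrix SDK or tool.

    # Minimal sizing helper based on the rules of thumb in this post.
    # Function names and defaults are illustrative only, not part of any Citrix SDK.

    def recommended_memory_cache_mb(workload: str) -> int:
        """Minimum temporary memory cache (MB) suggested for a workload type."""
        minimums = {"vdi": 256, "rds": 2048}  # VDI workers: 256MB, RDS/XenApp workers: 2GB
        return minimums[workload.lower()]

    def recommended_disk_cache_gb(free_disk_gb: float, page_file_gb: float) -> float:
        """Temporary cache disk = VM available disk space + VM page file."""
        return free_disk_gb + page_file_gb

    # Worked example from above: 60GB disk with 40GB used (20GB free) and a 10GB page file.
    print(recommended_memory_cache_mb("rds"))    # 2048
    print(recommended_disk_cache_gb(20, 10))     # 30.0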

MCSIO temporary memory cache only mode configuration

If running in this configuration, the expected IO behavior is that VM writes are cached in memory, so there would be no write IOPS to storage. Read IOPS would still occur, as VMs still need to read data from where the master image is stored.

If the temporary memory cache fills up, the device can become unusable: the system may hang or show a blue-screen error. To mitigate the possibility of this happening, enable temporary disk caching as well. You may never use the disk cache, but the overflow acts as a safety net should you run out of memory cache.
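To illustrate why the overflow matters, here is a deliberately simplified conceptual sketch of a write-back cache with disk overflow. It is not how the MCSIO driver is implemented (the real driver caches at the block level inside the guest); the class and values are purely illustrative.

    # Conceptual model only: the real MCSIO driver caches blocks inside the guest OS
    # and is not implemented like this. Sizes are in MB and purely illustrative.

    class WriteBackCache:
        def __init__(self, memory_cache_mb: int, disk_cache_mb: int = 0):
            self.memory_cache_mb = memory_cache_mb
            self.disk_cache_mb = disk_cache_mb   # 0 models "memory cache only" mode
            self.memory_used_mb = 0
            self.disk_used_mb = 0

        def write(self, size_mb: int) -> str:
            if self.memory_used_mb + size_mb <= self.memory_cache_mb:
                self.memory_used_mb += size_mb
                return "cached in memory (no write IO reaches storage)"
            if self.disk_used_mb + size_mb <= self.disk_cache_mb:
                self.disk_used_mb += size_mb
                return "overflowed to the temporary disk cache (redirected write IO)"
            # No room anywhere: in memory-only mode this is where the VM hangs or blue screens.
            raise RuntimeError("temporary cache exhausted")

    memory_only = WriteBackCache(memory_cache_mb=256)
    print(memory_only.write(200))                # fits in the memory cache
    try:
        memory_only.write(100)                   # memory cache exhausted, no overflow configured
    except RuntimeError as err:
        print(f"memory-cache-only mode: {err}")

    with_overflow = WriteBackCache(memory_cache_mb=256, disk_cache_mb=30 * 1024)
    print(with_overflow.write(200))
    print(with_overflow.write(100))              # spills safely into the temporary disk cache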

MCSIO temporary disk cache only mode configuration

If using this mode, expect the read and write IOPS characteristics to be consistent with those seen on machines provisioned with standard MCS. However, this mode has the advantage of allowing the write-heavy delta disk to be redirected to a different storage tier from that of your master image.

Summary

I hope these posts have given you useful insight into and guidance on MCSIO and its characteristics. As always, it is advisable to monitor and size your environment's workloads to understand the optimal cache size requirements; this will give the best combination of performance versus resource efficiency.

As mentioned in the previous posts, MCSIO and PVS technologies share similarities, so in addition to the above recommendations there are useful PVS blog posts applicable to MCSIO that are worth reading: Size Matters: PVS RAM cache Overflow Sizing, and Turbo Charging your IOPs with PVS Cache RAM with Disk overflow, Part 1 and Part 2.
