That’s right – we just put storage throughput and IOPS officially “on notice”! And in turn (whether we really meant to or not), we made every storage company that got into the business over the last ~6 years (since we introduced VDI) to solve the dreaded IOPS problem think twice. Citrix just threw down the gauntlet, and it means we can finally stop talking about IOPS. It’s a beautiful day. I am so tired of talking about IOPS, and I know storage architects are as well. So…just what the heck am I talking about? How can this be?
Well, in case you’ve been under a rock these last few months, we introduced a new Provisioning Services (PVS) feature called “RAM cache with overflow to disk”. This feature uses non-paged pool memory to cache IO first…and once the memory buffer you allocate fills up, it simply spills over to a VHDX-based differencing disk. Internally we are calling this the “new” write cache option. All the other write cache options that everyone has been using since we acquired Ardence are considered “legacy” at this point, since they offer inferior performance, can cause ASLR issues, and are VHD-based instead of VHDX-based. I am not going to go into all the stats in this article since my colleagues have already done a fantastic job of doing exactly that (please read the links in my bullets below if you have not already!). But I do want to point out a few highlights and thoughts of my own:
- If you thought PVS was dead (or “legacy technology”, as VMware seems to think), then you are flat-out wrong. Just wait until you see what we have up our sleeve next in terms of a PVS enhancement (a future blog post, since I cannot comment publicly just yet).
- This feature is an absolute game-changer. Forget everything you know about IOPS – where we were previously quoting numbers like an average of 15 IOPS per VM, we are now seeing more like 0.1 IOPS per VM. Ridiculous performance.
- Don’t get caught up in thinking you need a huge memory buffer for this to work (or that you need to buy more RAM). We saw the same mind-blowing results on both XA and XD workloads with very small buffers. As Dan said, start with 256-512 MB for your XD VMs and 2 GB for your XA VMs. You’ll still trend towards less than 1 IOPS per user or VM for both XA and XD.
- Worried about the spillover to disk or storage capacity (since throughput is essentially a non-issue now!!!)? You shouldn’t be. I was comfortable thin-provisioning the drive that hosts the wC file even before, when the legacy VHD-based wC options were generating mostly nasty 4K and 8K random writes. Remember, this new caching method uses the VHDX spec, which calls for 2 MB chunks. Those large block sizes are very easy to deal with, and they make the disk an even better candidate for thin provisioning! So if you don’t have time to accurately test and monitor usage, make the spillover disk big, thin provision it, and be done with it.
- I already know of 3 large customers using this feature in a production capacity. Their numbers are either in line with or BETTER than what DanF has published. Mostly better, since the LoginVSI medium workload is still a bit “heavy” in my opinion.
- There is no special edition of PVS you need for this – it is absolutely free with your entitlement to PVS. It is probably the best thing we have given away since Citrix Secure Gateway (CSG) about 10 years ago, in my opinion.
- If you are paying for VSAN, thinking about investing in SSDs or looking at a niche storage solution to solve the “IOPS problem”, you need to immediately stop what you’re doing, test this feature and then have a beer to celebrate.
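To make the “new” write cache mechanics a bit more concrete, here is a minimal Python sketch of the general idea – a cache that absorbs writes into a fixed-size RAM buffer and spills to a backing file once that buffer fills. This is purely illustrative: the class name and structure are my own simplification, not the actual PVS implementation (which uses non-paged pool memory and a VHDX differencing disk).

```python
import os

class OverflowWriteCache:
    """Illustrative sketch of "RAM cache with overflow to disk" --
    NOT the actual PVS implementation."""

    def __init__(self, ram_limit_bytes, spill_path):
        self.ram_limit = ram_limit_bytes
        self.ram_used = 0
        self.ram_blocks = {}      # block_id -> data held in memory
        self.disk_offsets = {}    # block_id -> offset in the spill file
        self.spill = open(spill_path, "wb+")

    def write(self, block_id, data):
        # Drop any stale in-memory copy of this block first.
        old = self.ram_blocks.pop(block_id, None)
        if old is not None:
            self.ram_used -= len(old)
        if self.ram_used + len(data) <= self.ram_limit:
            # Fast path: the write is absorbed entirely in RAM.
            self.ram_blocks[block_id] = data
            self.ram_used += len(data)
        else:
            # Buffer is full: spill this block over to the backing file.
            self.spill.seek(0, os.SEEK_END)
            self.disk_offsets[block_id] = self.spill.tell()
            self.spill.write(data)
            self.spill.flush()

    def read(self, block_id, size):
        # Reads are served from RAM when possible, else from the spill file.
        if block_id in self.ram_blocks:
            return self.ram_blocks[block_id]
        self.spill.seek(self.disk_offsets[block_id])
        return self.spill.read(size)
```

Note how the fast path never touches storage at all – which is exactly why a modest buffer (256-512 MB for XD, ~2 GB for XA) absorbs the vast majority of IO before anything hits disk.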
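And on the thin-provisioning point: some rough back-of-envelope arithmetic shows why 2 MB VHDX chunks are so much friendlier than the legacy 4K random writes. The 1 GB churn figure below is purely hypothetical, for illustration only.

```python
# Illustrative arithmetic only -- not measured PVS numbers.
KB, MB, GB = 1024, 1024 ** 2, 1024 ** 3

cache_churn = 1 * GB                     # hypothetical write-cache churn

legacy_ios = cache_churn // (4 * KB)     # legacy VHD wC: 4 KB random writes
vhdx_allocs = cache_churn // (2 * MB)    # new wC: 2 MB VHDX block allocations

print(legacy_ios)     # 262144
print(vhdx_allocs)    # 512
```

Three orders of magnitude fewer allocation events for the same amount of data is why a big, thin-provisioned spillover disk is such an easy call.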
Please remove IOPS from the IT dictionary and your vocabularies – we have bigger things to worry about now. And please upgrade your PVS environments to 7.x and start using this fantastic new (free) feature today. If you do have time to look at your IOPS consumption after you put some XA and XD workloads into production, please drop a comment below and let me know what you are seeing. I’d love to hear from you and compare notes.
Nick Rintalan, Lead Architect, Americas Consulting, Citrix Consulting