I’ve had to have a couple of “difficult conversations” (as we like to call them in Consulting) recently with a couple of enterprise customers.  In each of these situations, the customer was complaining about poor performance when using solid-state drive (SSD) technology in conjunction with their PVS-based XenDesktop deployment.  Before running any actual tests, I had long suspected that SSDs might not be a great fit for PVS-based XD deployments…and in this article, I’m going to tell you why that is and show you some interesting performance data that ultimately confirmed my suspicions.

Before I tell you why SSDs might not be an ideal fit for PVS-based XD deployments, I think it’s important to look at how far they’ve come and some of their benefits.  Compared to traditional spinning disks, SSDs can offer exceptional performance due to extremely low random access and read latency times.  That low latency means they can handle far more IOPS than traditional spinning disks.  Since SSDs have no moving parts, they are quiet, don’t suffer from environmental factors (shock, vibration, etc.) and true mechanical failures are virtually eliminated.  So that’s all well and good…but what are some of the downsides of using SSDs?
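
To put a rough number on that latency advantage, here’s a tiny back-of-the-napkin sketch in Python.  The latencies in it are purely illustrative ballpark figures (not measurements from any particular drive): at a queue depth of one, random IOPS is basically the reciprocal of per-I/O latency.

```python
def random_iops_at_qd1(avg_latency_ms):
    """At queue depth 1, random IOPS is roughly the reciprocal of per-I/O latency."""
    return 1000.0 / avg_latency_ms

# Ballpark, illustrative latencies (not measurements): a 15k spinning disk spends
# roughly 5.5 ms on seek + rotational delay per random I/O; an SSD read takes ~0.1 ms.
print(round(random_iops_at_qd1(5.5)))   # ~182 random IOPS per 15k disk
print(round(random_iops_at_qd1(0.1)))   # ~10,000 random read IOPS per SSD
```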

Although the cost of SSDs has come down in recent years, they are still pretty expensive.  Especially higher-end SSDs or Enterprise Flash Drives (EFDs)…a term EMC coined back in 2008 to describe SSDs that deliver higher performance, reliability and efficiency.  (And I’m going to be talking about these more expensive, higher-performing SSDs in this article unless stated otherwise…that’s what most of these enterprise customers are buying and sticking in their expensive arrays.)  Another potential downside of using SSDs is “write endurance” – essentially, the number of write cycles any block of flash can perform is limited, due to physics and technology imperfections that eventually make the data storage process unreliable.  The lopsided performance of reads versus writes is another potential issue…simply put, due to the way SSDs handle write operations, they just don’t perform that well when it comes to random write IOPS (and this is extremely important for our Citrix-based discussion, which I’ll get to in a minute).  So when you see storage vendors quote IOPS numbers, it’s going to be the best-case scenario – how fast they can handle read operations (which, as I mentioned earlier, is insanely high for SSDs!).  But the key thing to understand is that this number is probably not what the drive can sustain in terms of write IOPS.  In other words, write IOPS are “difficult,” and these drives only last so long if we’re constantly writing data to them over and over again.
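
To make that last point concrete, here’s a rough sketch of what a drive actually sustains under a mixed workload (Python again, and the spec-sheet numbers are made up for illustration – check your vendor’s datasheet for real ones).  The blended figure is dominated by the slower operation, which for SSDs is the random write:

```python
def blended_iops(read_iops, write_iops, write_fraction):
    """Rough sustainable IOPS for a read/write mix, assuming each I/O's service
    time is 1/IOPS for its type (a back-of-the-napkin model, nothing more)."""
    read_fraction = 1.0 - write_fraction
    avg_service_time = read_fraction / read_iops + write_fraction / write_iops
    return 1.0 / avg_service_time

# Hypothetical drive: quoted at 40,000 random read IOPS but only 3,500 sustained
# random write IOPS.  Under a PVS-like 90% write mix, the quoted number barely matters.
print(round(blended_iops(read_iops=40000, write_iops=3500, write_fraction=0.9)))  # ~3,851
```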

I should also point out here (since my critics will inevitably call me on this) that endurance has gotten much, much better.  And write performance has also improved considerably in recent years, especially in higher-end SSDs where DRAM is used as opposed to flash memory (which is typically non-volatile NAND).  Write performance and endurance also depend heavily on how the charge stored in a single floating-gate transistor cell is interpreted by the logic (I studied a lot of this stuff in college and used to program EEPROMs, so this was a trip down memory lane for me).  I don’t want to get too deep into this, but essentially you can have flash cells that interpret the stored charge as a 0 or 1, which is called Single Level Cell or SLC…or you can have memory chips that interpret the stored charge as a range of values (0 to 7, etc.), which is called Multi Level Cell or MLC.  Lower-priced drives usually employ MLC, which is slower and less reliable than SLC.  So you’re typically going to find SLC in these higher-end EFDs.  Or you might find MLC, but with a slew of internal design changes within the SSD controller to mitigate the write performance and endurance problems.  I’m talking about things like wear leveling, intelligent garbage collection, over-provisioning and write amplification mitigation – more concepts I don’t want to get into (but feel free to Google them, because it’s really interesting).  These techniques, developed over the last several years by brilliant storage engineers, can extend the usable life of an SSD by orders of magnitude (and also bring write performance more in line with read performance…but some vendors, like SandForce or WhipTail, are better than others in this area – read the comments at the end of this article for more details).  So the next time you hear that SSDs are “unreliable” or have “poor write performance”, that may or may not be the case…I’d check whether the drive is flash or DRAM-based, whether it employs MLC or SLC, and what advanced features or techniques are used within the SSD controller.  Because I’ve seen some cheap SSDs that have truly awful write performance and have already died…and then again, I’ve seen some higher-end EFDs that have pretty decent write performance and are quoted (even guaranteed in some cases) to last 5-7 years because of some of the advanced techniques I mentioned earlier.  So it just depends (as we Consultants always seem to say…) on the type of SSD and the technology inside.  There is no substitute for testing.
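
And if you want to see why things like over-provisioning and low write amplification matter so much for endurance, here’s a quick back-of-the-napkin estimate.  Every number in it is hypothetical – P/E cycle counts, write amplification factors and daily write volumes vary wildly by drive and workload – but it shows why a cheap MLC drive and a well-engineered SLC EFD live in completely different worlds:

```python
def drive_lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                         write_amplification=1.0, over_provision=0.0):
    """Crude flash endurance estimate: total NAND writes the drive can absorb,
    divided by what the host pushes at it per day.  Assumes perfect wear leveling."""
    raw_nand_gb = capacity_gb * (1.0 + over_provision)   # spare NAND from over-provisioning
    total_writes_gb = raw_nand_gb * pe_cycles
    nand_writes_gb_per_day = host_writes_gb_per_day * write_amplification
    return total_writes_gb / nand_writes_gb_per_day / 365.0

# Hypothetical 200 GB drives absorbing 500 GB/day of PVS write cache traffic:
print(drive_lifetime_years(200, pe_cycles=3000, host_writes_gb_per_day=500,
                           write_amplification=3.0))                       # cheap MLC: ~1.1 years
print(drive_lifetime_years(200, pe_cycles=100000, host_writes_gb_per_day=500,
                           write_amplification=1.2, over_provision=0.28))  # SLC EFD: decades
```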

Now that we’ve covered a little background on SSDs, let’s get back to the Citrix world.  As you might have heard me or others talk about before, PVS-based XD deployments generate around 80-90% write IOPS as opposed to read IOPS.  I’ve even seen this number approach 97% in the steady state in a real production environment…so that’s a lot of write IOPS going to a lot of little “write cache” drives on our storage array in most cases.  And now that you know how SSDs handle write operations and what the disk characteristics of PVS look like, you might start to realize that you’re not going to get the screaming performance the storage vendor quoted you when you bought those expensive drives.  To back this claim up, I’ve done some testing at two different customers where I compared high-end EFDs to traditional 15k FC spinning disks.  At the first customer we used Iometer and hdparm to test a number of different things (by the way, I recommend Iometer because it can generate multiple, complex streams of data and exercise both reads and writes…hdparm is more of a quick-and-dirty test and was designed more for IDE devices).  But the net-net result for a PVS-like workload (90/10 write/read) was that the RAID1+0 LUN composed of SSDs was able to handle only about 22% more IOPS than the RAID1+0 LUN composed of 15k FC disks.  This particular customer did this testing BEFORE they designed their XD environment and decided to go with 15k spinning disks, since it was almost a “wash” when they did the math.
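
For what it’s worth, if you don’t have Iometer handy, even a crude script will show you the write-heavy picture.  Here’s a quick-and-dirty sketch in Python (single-threaded, one fsync per write, and the test path is just an example) – nowhere near as thorough as Iometer, but fine as a sanity check of random 4K write IOPS on a given LUN:

```python
import os, random, time

def random_write_iops(path, file_size_mb=256, block_size=4096, duration_s=10):
    """Synchronous 4K writes at random offsets in a preallocated file, fsync'd
    per write so the device (not the page cache) absorbs each I/O."""
    size = file_size_mb * 1024 * 1024
    block = os.urandom(block_size)
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    try:
        os.ftruncate(fd, size)
        writes, start = 0, time.time()
        while time.time() - start < duration_s:
            offset = random.randrange(0, size - block_size, block_size)
            os.pwrite(fd, block, offset)
            os.fsync(fd)
            writes += 1
        return writes / (time.time() - start)
    finally:
        os.close(fd)

# Point it at a file on the LUN you care about (the path below is just an example).
print("~%.0f random write IOPS" % random_write_iops("/mnt/writecache/testfile"))
```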

Another customer I recently worked with didn’t have the luxury of testing beforehand (time and resources, as always…) and simply purchased a number of high-end SSDs and stuck them in their CLARiiON CX4 array.  After going into production, I inevitably got the question – “Hey…we’re not seeing that great performance on our SSD-based LUNs that host the write cache disks…what’s up with that?”.  After engaging both Citrix XenServer Engineering and EMC, we concluded the storage design was optimal (RAID1+0, ~30 VDIs/LUN, etc.).  So we ran a number of tests with Iometer and hdparm…here are the results from hdparm (first with 15k FC disks and then with EFDs):

Results of hdparm with 15k FC LUN
Results of hdparm with SSD LUN

And again, I know hdparm isn’t the best tool in the world, but we also used Iometer and saw the same pattern – in this case the RAID1+0 LUN composed of SSDs performed about 30% better than the RAID1+0 LUN composed of 15k FC disks.  So again, there were performance gains from using SSDs, but they certainly weren’t what this customer (or the other one) was expecting…especially after getting “sold” on these killer drives that were supposed to deliver insane IOPS numbers.

To summarize, SSDs have gotten much, much better over the years and prices are coming down, which is great.  We’re starting to solve difficult problems like write amplification and endurance.  But SSDs are still expensive compared to traditional spinning disks, and they struggle the most with write IOPS due to the nature of how they are built and operate.  And if you’re using PVS in your XD deployment, you’ll be generating almost all write IOPS.  You will likely get some performance benefit from using SSDs, but don’t be surprised if it’s marginal.  If we were generating 95% read IOPS, then I’d probably be telling you to run out and buy SSDs, because you’d likely get the 500% performance gains you’ve been quoted by your storage vendor.  But that’s simply not the case here…

So please, please, please – do some testing beforehand, run the numbers and do the math.  If you get a 60% performance increase with SSDs, then maybe it’s worth it for you.  But if it’s just a 10% performance increase, then you might be better off sticking with traditional 15k disks.  You’re going to have to figure that out for yourself because every environment is different.  But just remember – I warned you. 😉
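
And when I say “do the math,” I literally mean something like the sketch below.  Every price and IOPS figure in it is a hypothetical placeholder – plug in your own quotes and your own measured numbers – but the metric is the one that matters: what each IOPS you actually get costs on each tier.

```python
def price_per_delivered_iops(price_per_drive, drive_count, measured_iops):
    """Cost of each IOPS you actually measured for your workload on this tier."""
    return (price_per_drive * drive_count) / measured_iops

# Hypothetical prices and the ~30% gain measured above – substitute your own numbers.
fc_15k = price_per_delivered_iops(price_per_drive=400, drive_count=8, measured_iops=2000)
ssd = price_per_delivered_iops(price_per_drive=2500, drive_count=8, measured_iops=2000 * 1.3)
print("15k FC: $%.2f per IOPS   SSD: $%.2f per IOPS" % (fc_15k, ssd))
# With numbers like these, the SSDs cost nearly 5x more per delivered IOPS.
```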

-Nick

Nick Rintalan

Senior Architect, Citrix Consulting