It’s time to change a best practice. After all, leading practices should evolve. For years I’ve been saying things like “Go Big or Go Home” when it comes to sizing XenApp workloads. Here is some proof from a session I delivered a few months ago in Thailand to our partners.
And what I mean by that statement is that when it comes to sizing XA VMs, you should go as big as you can (within the confines of NUMA nodes or clusters) while ensuring linear scalability. In other words, when it comes to sizing XA workloads, I’ve been preaching to “scale up.” So, if you have a 12-core socket that is split evenly into 2 NUMA nodes, I’d recommend XA VMs with 6 vCPUs. It’s a fairly simple concept – maximize user density and minimize your VM footprint.
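To make that on-prem “scale up” rule concrete, here’s a minimal sketch of the sizing math. The 12-core/2-NUMA-node example comes from the text above; the dual-socket host in the usage example is a hypothetical I’ve added for illustration.

```python
# The on-prem "scale up" rule: size each XA VM to fill a NUMA node,
# but never to span one (which would hurt linear scalability).

def xa_vm_vcpus(cores_per_socket: int, numa_nodes_per_socket: int) -> int:
    """Largest vCPU count that still fits inside a single NUMA node."""
    return cores_per_socket // numa_nodes_per_socket

def xa_vms_per_host(sockets: int, numa_nodes_per_socket: int) -> int:
    """One NUMA-aligned XA VM per NUMA node, no over-subscription."""
    return sockets * numa_nodes_per_socket

# A 12-core socket split evenly into 2 NUMA nodes -> 6 vCPUs per XA VM
print(xa_vm_vcpus(12, 2))       # 6
# A (hypothetical) dual-socket host would then run 4 such VMs
print(xa_vms_per_host(2, 2))    # 4
```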
But something changed over the last year or so (besides Intel shipping chips like the 8890 with 24 cores in a socket which also should make everyone think twice about scaling up). Our customers started moving workloads to the public cloud. And when our first customer asked me how they should size their XenApp workloads in Azure (i.e. which instance type and associated VM specs to pick), I said I needed to do some research because I wasn’t actually sure if the same rules applied.
I wasn’t sure if scaling up or out was better from a performance and cost perspective. And I’m glad I did that research because the same rules that apply for sizing XA workloads on-prem do not apply when sizing for the public cloud. The new best practice for sizing XA workloads is officially an “it depends” answer again. Consultants rejoice. 😉
But what does it depend on? A few things. I already told you the first one – whether the workloads will reside on-prem or in the cloud. But let’s assume it’s the cloud in this case. The second thing it depends on is which public cloud – Azure, AWS, etc. Since all public cloud vendors have their own secret sauce, different instance types with varying VM specs, and different cost models, there will inevitably be different “sweet spots” depending on the public cloud selected. And the last thing it depends on is the scale of the deployment and the mission-criticality of the workloads or apps. So, what are those sweet spots or magic instance types for XenApp?
Luckily, a number of folks have already done a ton of testing in this area, so we can share some results. Most of these results are based on cost models from a few months ago, and LoginVSI was used to determine Single Server Scalability (SSS) or the VSIMax. It is important to note these do assume “full load.” So, while your mileage could vary, these instance types are a pretty solid starting point (and we’ve also used these instance types and specs on our first few projects). Without further ado:
- Azure: Standard_D2_v2 (2 vCPUs, 7 GB RAM)
- AWS: t2.large (2 vCPUs, 8 GB RAM) or c3.xlarge* (4 vCPUs, 7.5 GB RAM)
So, how the heck did we come up with these instance types when there are about 100 to choose from? Again, it’s all about the “sweet spot” which boils down to the best performance at the best price. I’d encourage everyone to read up and learn more about these results in the associated AWS and Azure whitepapers. The LoginVSI results and economics are quite fascinating.
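Since the “sweet spot” boils down to best performance at the best price, it can be expressed as a simple cost-per-user calculation: hourly instance price divided by the VSIMax from LoginVSI testing. The prices and VSIMax figures below are made-up placeholders, not the whitepaper numbers – check the current cloud price lists and the AWS/Azure whitepapers for real data.

```python
# "Sweet spot" = lowest cost per concurrent user.
# All prices and VSIMax values here are hypothetical placeholders.

def cost_per_user(hourly_price: float, vsimax: int) -> float:
    """Hourly instance cost divided by max concurrent users (VSIMax)."""
    return hourly_price / vsimax

candidates = {
    "small_instance":  (0.20, 10),   # ($/hr, VSIMax) -- hypothetical
    "medium_instance": (0.50, 20),
    "large_instance":  (1.00, 35),
}

sweet_spot = min(candidates, key=lambda k: cost_per_user(*candidates[k]))
for name, (price, users) in candidates.items():
    print(f"{name}: ${cost_per_user(price, users):.3f} per user per hour")
print("sweet spot:", sweet_spot)   # small_instance
```

Note that with these (invented) numbers the smallest instance wins, which mirrors the real-world result: the biggest VM is not automatically the cheapest per user.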
Now let me try to explain the AWS instance type recommendations. Because on the Azure side, it’s easy – those D2_v2 instances are the best bang for your buck and they are suitable for both test and production workloads. But what’s up with the AWS recs? Well, if we’re doing something small-scale or just have some non-mission critical workloads, then the T2 instances are the best bang for your buck by far. But the T2’s are “burstable” performance instances, meaning you’re guaranteed a certain level of performance and you can burst above the baseline only so often. How often? It’s governed by CPU credits. And I like the t2.large (which isn’t in our whitepaper actually) versus the t2.medium because you get 8 GB RAM vs. 4 GB RAM and you also get a few more CPU credits per hour (36 vs. 24). And the bump in cost is almost negligible. So, that’s what I’d recommend for any non-prod or small-scale environments (20 VMs or less) in AWS.
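Here’s a rough sketch of how that T2 credit mechanism behaves. The credit-earn rates (36/hr for t2.large, 24/hr for t2.medium) come from the paragraph above; the burn rates and run times are hypothetical, and this simplified model ignores AWS’s credit caps and initial credit balances.

```python
# Simplified T2 "burstable" model: credits accrue at a fixed hourly
# rate and are spent when the VM runs above its baseline. This ignores
# AWS credit caps/initial balances -- it's an illustration, not the
# exact EC2 accounting.

def credits_after(hours: float, earn_rate: float, burn_rate: float,
                  start: float = 0.0) -> float:
    """Credit balance after running at a constant burn rate (floored at 0)."""
    return max(0.0, start + hours * (earn_rate - burn_rate))

# A t2.large earning 36 credits/hr while burning 30/hr banks credits...
print(credits_after(10, earn_rate=36, burn_rate=30))             # 60.0
# ...while sustained heavy bursting drains any buffer it built up.
print(credits_after(10, earn_rate=36, burn_rate=60, start=100))  # 0.0
```

The takeaway matches the text: T2 instances are great when load is bursty or modest, but a workload that runs hot all day will eventually exhaust its credits – which is why the C-series is the safer pick for mission-critical XA.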
As for mission-critical workloads or an enterprise-grade XA deployment in AWS, I’d recommend the compute-optimized “C” series. We only have data from the C3 instances in our whitepaper as you’ll see, but I think the C4’s, which are based on the newer Haswell chips (versus Ivy Bridge in C3’s), will be the way to go. It’s just hard to recommend a specific C4 instance right now without any data. But if you look at the C3 data, you’ll see that the c3.xlarge instance is a great sweet spot. And the specs on that are 4 vCPUs and 7.5 GB RAM. One would assume the c4.xlarge instance would be the way to go then (same specs as I just mentioned) but we don’t have the LoginVSI data to back it up just yet.
Circling back to scaling up vs. out, AWS doesn’t prove my point as well as Azure, right? 😉 After all, 4 vCPUs is a decent-sized XA workload. But if you think about 8, 9, 10 or even 12 vCPU XA VMs on some of these newer Intel Broadwell-EX boxes, it actually does start to say something: bigger is not necessarily better. And of course the Azure “winner” at just 2 vCPUs really does prove that scaling out is better than scaling up when you’re deploying XA workloads in the cloud.
Now does that mean you have to scale out or go with these “smaller” VM specs? Nope – you can certainly go bigger and manage fewer VMs, but you’ll pay more, and that’s the sacrifice. And will these instance types always be the winners? Nope – this stuff is rapidly evolving and public cloud cost models are constantly changing. So, we’ll have to continually do our testing (and keep LoginVSI at the ready) and see what wins out every few months.
Anyway, next time someone asks how to size XA VMs I hope you ask them if they’re considering on-prem or cloud (and which public cloud). Because we can no longer provide that robotic reply of “scale up” or “go big or go home” – it just might be best to scale out in this new cloud era.
By the way, if you’ve done similar testing with XA (or XD) workloads in a public cloud other than AWS or Azure, I’d love it if you drop a comment (along with some knowledge) below. Thanks for reading and hopefully this info helps as you make your journey to the cloud with Citrix.