As promised in my Designing Scalable XenDesktop Farms webinar on 22 September, I have provided answers to the questions I did not have time to address. The full recorded webinar is available on Citrix TV or through the original URL above.
Q: Are there any differences in scalability between hypervisors (Hyper-V, ESX, XS)?
A: This is a very common question, and I wish I could give you a clear, straightforward answer. Generally speaking, the hypervisor with the highest desktop density would be considered the most scalable. In projects where I have run the same workload on the same hardware while varying the hypervisor, the variance between the top and bottom performers has been less than 10%; in other words, the top performer might support 85 desktops while the worst supports only 80. Furthermore, I have seen results where each of the three hypervisors supported by XenDesktop has come out on top, so I must conclude that scalability is tied more closely to the workload and operating system than to the hypervisor itself.
To answer the question more directly: at this time there are slight differences depending on the environment the hypervisors are used in. I also believe that as vendors continue to focus on increasing density and hypervisors become more of a commodity, this variance will continue to decline.
Q: How about 4-socket, 8-core machines? Are there any experiences?
A: At this time I have not been involved with any testing of quad-socket or 8-core machines. If anyone reading this blog has that type of experience, I would be interested in reading about it as well. Of course, if someone has a project involving this hardware and could benefit from my experience, I would be open to helping.
Q: Is there a way to limit how fast idle desktops get turned on?
A: Good question. The Pool Management service has a setting in an XML configuration file that lets you throttle the spin-up rate for new desktops. The default rate is capped at 10% of the desktop group size or 20, whichever is larger. Keep in mind that this is a per-desktop-group limit, so if multiple desktop groups share the same XenCenter, vCenter, or SCVMM host, the limits are combined and may overrun your pool management infrastructure. To change the idle pool spin-up rate, modify the MaximumTransitionRate value in the CDSPoolMgr.exe.config file on all the DDCs using these steps:
Configuring the MaximumTransitionRate:
1. Open the CDSPoolMgr.exe.config file on the DDC in a text editor.
2. Set the MaximumTransitionRate value to the desired limit.
3. Restart the Pool Management service.
NOTE: The value of 20 for the MaximumTransitionRate is provided only as a reference point. The actual value should be determined for your environment through testing against the hypervisor infrastructure.
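As an illustration only, the setting lives in the service's .NET configuration file and looks something like the fragment below. The surrounding elements are the standard .NET appSettings shape and may differ slightly between XenDesktop versions, and the value of 40 is an example, not a recommendation; validate any value against your hypervisor infrastructure.

```xml
<!-- Fragment of CDSPoolMgr.exe.config (verify element layout against your installed version).
     MaximumTransitionRate caps how many idle desktops Pool Management powers on at once. -->
<configuration>
  <appSettings>
    <!-- Example value only; test before raising in production. -->
    <add key="MaximumTransitionRate" value="40"/>
  </appSettings>
</configuration>
```

Remember to make the same change on every DDC and restart the Pool Management service for it to take effect.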
Q: The tools sound great, but they are customer specific. What happens when you build a multi-tenant data center solution, where each customer works very differently from the others?
A: Interesting point; I had not considered that extensively before. The primary challenge in multi-tenancy is getting maximum utilization out of the equipment without one customer's workload affecting another customer's experience. To start, we would need a good understanding of the different types of workloads likely to be encountered. Then, using that information, use a tool like Login VSI to create custom workloads and intermix them against the farm. Finally, leverage monitoring tools for the farm and migrate workloads to other servers as the load on a server approaches SLA levels.
I believe the best approach may be to use something like Microsoft Opalis to automate tasks such as moving VMs when hypervisor hosts approach full capacity. Opalis integrates with System Center so the SLA information would be readily available and then the task would be to design the workflow to move the virtual machines around.
Q: What are the recommendations about using VLANs?
A: Generally speaking, the recommendation is to dedicate one VLAN to management traffic, one to storage traffic, one to the clients, and one to the hypervisor for events like migration. If you have Provisioning Services in the deployment, dedicating a VLAN to that traffic also makes sense, especially when the number of clients could saturate the link during a boot storm. Of course, for small deployments, some of the VLANs can be combined depending on the amount of traffic expected on the wire. For instance, combining the management and hypervisor VLANs into one simplifies the networking, and management traffic usually coexists nicely with hypervisor traffic.
One of the first exercises I normally go through after arriving on-site with a customer is to assess their network infrastructure and overlay the XenDesktop design to verify the network has sufficient bandwidth to meet the production requirements. I look at things like the number of hosts on the VLAN, the amount of traffic expected for storage, and if applicable the operating system streaming traffic from Provisioning Services. Occasionally, we have had to redesign the VLANs to prevent saturation.
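As a rough illustration of that bandwidth check, here is a back-of-the-envelope calculation for the Provisioning Services boot-storm case. The per-desktop figures (200 MB streamed per boot, a 5-minute boot window) are assumptions for the sketch, not measurements; substitute numbers from your own vDisk and boot-window testing.

```python
def boot_storm_mbps(num_desktops, mb_per_boot=200.0, window_seconds=300.0):
    """Average megabits per second on the streaming VLAN during the boot window.

    Assumed inputs for illustration: mb_per_boot is the vDisk data each
    desktop streams while booting; window_seconds is how long the boot
    storm lasts. Both should come from your own testing.
    """
    # megabytes -> megabits (x8), spread across the boot window
    return num_desktops * mb_per_boot * 8 / window_seconds

# 100 desktops booting within 5 minutes:
avg = boot_storm_mbps(100)
print(f"{avg:.0f} Mbps average on the streaming VLAN")
```

Note this is an average; actual streaming traffic is bursty, so a link that looks fine on average can still saturate at the peak, which is why I verify against the real infrastructure.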
Q: Will there be a possibility to provide more cores/threads for client OS using XenServer? I know this is more a XenServer than XenDesktop question, but XenServer shows all threads as CPUs and there is CPU-limitation in Win7.
A: I had to reach out to product management to get you an answer to this question. It turns out that XenServer (versions 4.1 and later) supports this with Intel chips. Basically, you set a parameter to assign multiple cores to a socket. For instance, you can set four cores per socket and then assign eight vCPUs to the VM, which presents the VM with two CPUs of four cores each and stays within the Windows 7 socket limit.
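As a sketch of what that looks like from the XenServer command line: the parameter names below are the ones I understand the xe CLI to use for this, but verify them against the documentation for your XenServer version. The VM UUID is a placeholder you would look up with xe vm-list.

```shell
# Present 8 vCPUs to the guest as 2 sockets x 4 cores each.
# <vm-uuid> is a placeholder; find it with: xe vm-list
xe vm-param-set uuid=<vm-uuid> platform:cores-per-socket=4
xe vm-param-set uuid=<vm-uuid> VCPUs-max=8
xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=8
```

With four cores per socket, Windows 7 sees two sockets, which keeps it under the client-OS socket limit while still getting all eight threads.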
Q: Will the presentation be available?
A: The presentation with my speaker notes can be downloaded from here.
If I didn’t answer your question above, it was probably because some context was missing and I could not understand what was being asked; feel free to ask again. For others viewing the recorded webinar, if you have questions after watching it, feel free to post them to this blog and I will answer them as best I can.