You may have read this post about the Top 10 things Citrix Consulting finds when we do an Assessment in the mobility space (and if you haven’t, you should). But things have changed quite a bit in the XenMobile world since 2014, so it got me thinking we might need to refresh the list.

I took a closer look and realized that a number of these big themes are still pretty darn relevant. Sure, the AppC-specific stuff doesn’t factor in when we start talking about XenMobile 10, but we still see things like customers not fully qualifying the mail strategy they go with and wanting to change authentication strategies well into their rollout.

Rather than refreshing the big themes, this post will focus on a few specific configurations and optimizations that many customers seem to overlook. A few of these are even default settings that are just fine when you are talking about small scale or a POC, but they definitely need to be changed if you are planning to go big. Some others factor in regardless of scale. So, without further ado, here is a quick list of some key optimizations you should be sure to think about in your XenMobile environment:

Timeouts: Aside from choosing a mobile mail strategy that doesn’t line up with user requirements, sub-optimal timeouts are the #1 way to kill user experience and slow adoption. I wrote a whole post dedicated to this topic about a year ago in much greater detail. If you think you might not have your timeouts just right, that’s the place to start. In the context of user experience, we see a number of customers not giving themselves enough of a buffer between when their max offline period expires and when the WorxMail Background Services Ticket (STA) expires. This can lead to the perception that users are not receiving mail and notifications, when the solution is actually doing exactly what it was configured to do. Just be sure to give the security implications of an overly large ‘grace period’ some thought as well.
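To make the buffer concrete, here is a minimal sketch of the arithmetic. The function and parameter names are my own for illustration (they are not actual XMS property names), and the example hour values are assumptions, not recommendations:

```python
# Sketch: sanity-check the gap between the max offline period and the
# WorxMail Background Services (STA) ticket lifetime. Names and values
# here are illustrative, not pulled from a live XMS configuration.

def sta_grace_period_hours(max_offline_hours: float, sta_ticket_hours: float) -> float:
    """Return the buffer (in hours) between the max offline period expiring
    and the STA ticket expiring. A negative or near-zero value means users
    may stop receiving mail/notifications with no prompt to reconnect."""
    return sta_ticket_hours - max_offline_hours

# Example: a 72h max offline period against a 168h (7-day) STA ticket
buffer = sta_grace_period_hours(max_offline_hours=72, sta_ticket_hours=168)
print(f"Grace period: {buffer}h")  # Grace period: 96.0h
if buffer < 24:
    print("Warning: small buffer -- users may perceive a mail outage")
```

The same check, run with your real values, is a quick way to spot the “mail stopped working” perception problem before users do.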

Server Properties: There are a few key XenMobile Server properties that really need to be tuned in just about every environment. This is one of those scenarios where the defaults are fine for most small-scale implementations, but if you start going over a few hundred devices, these need to be adjusted:

  • APNS Connection Pool: The default value here is blank, which effectively means 1. If we leave it this way, we will eventually create a bottleneck where XMS can’t efficiently communicate with APNS. The recommendation here is to increase this value by 1 for every ~400 devices, up to a maximum of 15. We could get into a whole other post on the why, but the bottom line is that failing to increase this value can result in slow app/policy pushes to iOS devices and/or slower device registration, both of which can negatively impact UX.
  • c3p0 Max Size: To answer your first question, no, this has nothing to do with the shiny gold droid from Star Wars, though we are going to talk plenty about Droids in a minute :). It defines the maximum number of connections XMS can open to the SQL database, so this one requires some balance. Set it too high with an undersized SQL box and you could run into resource issues on the SQL side during peak load. Set it too low and you may not be able to take advantage of the SQL resources available. If you are in that 10k+ device range, you are probably going to need to increase this setting to something like 500-600 and make sure SQL can handle the peak load. Bottom line, if you are seeing behavior that seems like it could be related to database performance, this is the first place you should look. It is also important to remember that each node could be opening this number of connections, but it will only open them if it needs them.
  • Heartbeat Interval: This value defaults to 6 (hours) and governs how frequently an iOS device checks in if an APNS notification is not delivered in the interim. If you have a large number of iOS devices in your environment, this can lead to higher load than necessary. Security actions, such as selective wipe, lock, full wipe, etc. don’t rely on this heartbeat, as an APNS notification is sent to the device when these actions are executed. This value does govern how quickly policy changes resulting from AD group membership updates are detected. As such, it is often suitable to increase this value to something between 12 and 23 hours to reduce load.
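The sizing math behind these three properties is simple enough to sketch. The 1-per-~400-devices ratio and the cap of 15 come from the guidance above; the function names, node counts, and device counts are illustrative assumptions:

```python
import math

# Rough calculator for the three server properties discussed above.
# The APNS ratio (1 connection per ~400 devices, capped at 15) is from
# the guidance in this post; everything else is an assumed example.

def apns_pool_size(ios_devices: int) -> int:
    """Recommended pool: 1 connection per ~400 iOS devices, max 15."""
    return min(15, max(1, math.ceil(ios_devices / 400)))

def worst_case_db_connections(nodes: int, c3p0_max: int) -> int:
    """Each XMS node could open up to c3p0_max SQL connections,
    so size the SQL box for nodes * c3p0_max at peak."""
    return nodes * c3p0_max

def daily_heartbeats(ios_devices: int, interval_hours: int) -> int:
    """Approximate scheduled iOS check-ins per day driven by the heartbeat."""
    return ios_devices * (24 // interval_hours)

print(apns_pool_size(6000))               # 15 (6000 / 400)
print(worst_case_db_connections(3, 600))  # 1800 possible peak SQL connections
print(daily_heartbeats(10000, 6))         # 40000 check-ins/day at the default
print(daily_heartbeats(10000, 12))        # 20000 -- half the load at 12h
```

Doubling the heartbeat interval halves the scheduled check-in load, which is exactly why the 12-23 hour range above is attractive in large iOS estates.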

Android Connection Schedules: The first oversight we see here is that customers commonly forget to configure a Connection Schedule altogether. Not what we want. The second key factor is understanding that in a large Android deployment, this setting has a huge impact on load.

We see way too many customers select a value here arbitrarily without fully examining their management and security requirements. If your organization requires 24×7 control over Android devices, ‘Always Connected’ is right for you. If not, going with something less aggressive will save you on XMS and SQL resources. The other key factor is that establishing and tearing down connections is a resource-intensive process, so you are actually better off with ‘Always Connected’ than with a connection once every hour or two. Somewhere around the 5-6 hour mark tends to be the break-even point. If you do go ‘Always Connected’, make sure you have the Background Deployment and Background Hardware Inventory values tuned to your liking.
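A toy model shows the shape of that break-even trade-off. The two cost constants below are hypothetical relative weights I picked so the numbers line up with the 5-6 hour observation above; they are not measured XMS figures:

```python
# Back-of-the-envelope model of the 'Always Connected' break-even point.
# Both cost constants are hypothetical relative weights, not measurements.

SETUP_TEARDOWN_COST = 5.0      # assumed cost of one connect/disconnect cycle
KEEPALIVE_COST_PER_HOUR = 1.0  # assumed cost of holding a connection open

def hourly_cost_always_connected() -> float:
    """Constant cost of keeping the session alive."""
    return KEEPALIVE_COST_PER_HOUR

def hourly_cost_scheduled(interval_hours: float) -> float:
    """Connecting once every interval_hours amortizes the setup cost."""
    return SETUP_TEARDOWN_COST / interval_hours

for interval in (1, 2, 5, 12):
    sched = hourly_cost_scheduled(interval)
    winner = "scheduled" if sched < hourly_cost_always_connected() else "always-connected"
    print(f"connect every {interval}h: {sched:.2f}/h -> {winner} wins")
```

With these assumed weights, hourly or two-hourly schedules cost more than staying connected, and the curves cross right around the 5-hour mark, which mirrors the break-even point we see in practice.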

We also recently introduced support for GCM, which removes some of these scheduling dependencies. So, if you haven’t brushed up on GCM, this article is worth a read as well and may change your org’s scheduling requirements. If you are like most organizations we work with, you can probably get away with GCM and a ‘long’ scheduling policy, such as the device checking in once every 12-23 hours.

Policy Deployment Schedules: Not to be confused with Android Connection Schedules, Policy Deployment Schedules can also play a key role on the load front, especially (again) in an environment with lots of Droids. By default, policies will be set to ‘Always Deploy’. Unlike iOS, Android unfortunately does not provide a method to detect whether the policy has already been pushed to the device. So, this configuration can lead to the same policy deploying over and over and over again to the same Android device. In some scenarios, such as with Software Inventory, that is okay. In others, this adds little value and lots of extra server/DB load, and can actually be intrusive to users. It also makes discerning what is happening on the console side a mess. Think about setting policies to ‘Deploy if failed’ unless there is a compelling reason not to for the policy in question.
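The load difference between the two modes is easy to illustrate. This is a simplified sketch of the behavior described above, not XenMobile’s actual deployment logic, and the connection counts are assumed examples:

```python
# Sketch of why 'Always Deploy' multiplies load on Android: the server
# re-pushes the policy on every connection, while 'Deploy if failed'
# stops pushing after the first successful deployment. Simplified model.

def pushes(connections: int, mode: str, first_success_on: int = 1) -> int:
    """Count pushes of one policy to one device across its connections."""
    if mode == "always":
        return connections          # re-deployed on every single connection
    if mode == "deploy_if_failed":
        return first_success_on     # pushed until the first success, then done
    raise ValueError(f"unknown mode: {mode}")

# One Android device connecting every 2 hours for a week = 84 connections:
print(pushes(84, "always"))            # 84 pushes of the same policy
print(pushes(84, "deploy_if_failed"))  # 1 push
```

Multiply that 84-to-1 ratio across thousands of devices and dozens of policies, and the server/DB savings from ‘Deploy if failed’ become obvious.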

There are definitely some other knobs out there in the XM world to tweak and tune, but these are the big hitters that we regularly see lead to massive reduction in server load and/or improved user experience. The best part is that most of them are easy to adjust. Just be sure to test any changes against your specific use case before implementing in production to avoid any inadvertent impact to end-users. You knew you weren’t going to get through this whole post without that disclaimer ;).

Have another setting in mind or a question about the ones mentioned above? Feel free to drop a comment below.

Thanks for reading. And a special thanks to my colleague Jay Guash for all of his assistance in compiling these findings.

Ryan McClure

Enterprise Architect | Citrix Consulting
