That’s the reaction some of you may get from your network operations team as you deploy VM images across the network from the Synchronizer for XenClient to your XenClient devices. I know because it’s the first thing I heard from the Citrix Network Operations folks while rolling out XenClient 2.0 internally at Citrix. Now that XenClient 2.0 is out in the field, we expect many more customers to move forward with production deployments, and network impact becomes a legitimate question. You can mitigate many of these concerns by taking advantage of a number of features we’ve put into the product. In this blog I’d like to raise awareness of the work done in the XenClient 2.0 release to further optimize the user experience and minimize impact to the network.

It’s important to remember that the initial download of the VM image to the XenClient device represents the biggest network impact of XenClient/Synchronizer transactions. To minimize this initial impact, the VM image is stored and delivered from the Synchronizer as a compressed VHD; as an example, Citrix IT’s 20 GB image compressed down to about 10 GB. Similarly, subsequent image updates from the Synchronizer to the XenClient device, and image backups from XenClient to the Synchronizer, are stored as compressed VHD deltas, again minimizing the impact on user experience and the network. These compression capabilities were present prior to XenClient 2.0, and 2.0 adds further compression improvements. Also new in 2.0 are image backup filters that remove unused blocks, pagefiles, and hibernate files to reduce the size of the update files.
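
If you want to put rough numbers on that initial impact for your own environment, a quick back-of-envelope calculation helps. Here is a minimal Python sketch; the image size, compression ratio, user count, and link speed are placeholder assumptions (loosely based on the Citrix IT example above), so substitute your own measurements.

```python
# Back-of-envelope sizing for an initial XenClient image rollout.
# raw_image_gb, compression_ratio, users, and link_mbps are assumptions;
# substitute your own measurements.

def rollout_estimate(raw_image_gb=20.0, compression_ratio=0.5,
                     users=100, link_mbps=100.0):
    """Estimate data transferred and wall-clock time on a shared link."""
    compressed_gb = raw_image_gb * compression_ratio   # e.g. 20 GB -> ~10 GB
    total_gb = compressed_gb * users                   # each user pulls a full copy
    seconds = total_gb * 8 * 1000 / link_mbps          # GB -> Gb -> Mb, then divide by Mbps
    return compressed_gb, total_gb, seconds / 3600

per_user_gb, total_gb, hours = rollout_estimate()
print(f"~{per_user_gb:.0f} GB per user, ~{total_gb:.0f} GB total, "
      f"~{hours:.1f} hours if the link were fully saturated")
```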

We previously blogged about the user experience improvements, bandwidth reduction, and Synchronizer-processing offload that a Branch Repeater brings to XenClient. The Branch Repeater enables a ‘fan out’ architecture that minimizes the impact of deploying managed virtual machines to remote branch offices from a centralized Synchronizer. With the VM cached on the Repeater at the branch location, the Branch Repeater appliance handles the bulk of the transfer work, taking the load off the Synchronizer and the network while letting users get to their VM quickly. I encourage you to read that post in its entirety for more details.
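
To make the fan-out benefit concrete, here is a tiny, purely illustrative calculation of WAN traffic with and without a Branch Repeater; the image size, users per branch, and branch count are made-up numbers, not measurements.

```python
# Rough illustration of the fan-out benefit: with a Branch Repeater the
# compressed image crosses the WAN roughly once per branch instead of once
# per user. All numbers here are placeholders, not measurements.

def wan_traffic_gb(image_gb, users_per_branch, branches, with_repeater):
    copies_over_wan = branches if with_repeater else users_per_branch * branches
    return image_gb * copies_over_wan

image_gb, users_per_branch, branches = 10, 50, 4
print("Without Repeater:", wan_traffic_gb(image_gb, users_per_branch, branches, False), "GB over the WAN")
print("With Repeater:   ", wan_traffic_gb(image_gb, users_per_branch, branches, True), "GB over the WAN")
```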

The ‘DVD/USB Precache’ feature allows administrators to download the master VM image from the Synchronizer to a DVD or USB stick. That media can then be copied en masse and distributed to end users with XenClient-enabled devices. Users insert the DVD or USB stick containing the VM image into their XenClient device and go to the Citrix Receiver for XenClient (Ctrl-0) -> select ‘Add VM’ from the top left of the screen -> select ‘Download From Synchronizer’ -> enter the Synchronizer URL. The XenClient system then looks for the VM image on local media first (in the DVD bay or USB port). If local media is present, the VM is pulled onto the XenClient device; if nothing is present, XenClient attempts to connect to the Synchronizer URL entered. If the entire image does not fit on the external media, the XenClient device contacts the Synchronizer for the remaining portions of the image. In all cases, as a final step, the XenClient device makes a connection to the Synchronizer and becomes a managed device: backups can be enabled, updates deployed, and XenClient policies pushed out. In addition to minimizing the impact to the corporate network, this feature is also useful for remote workers who may not have robust Internet connections, though admins might also consider using this approach for users on the LAN.
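
For clarity, here is a hedged sketch of the decision flow just described; it is not XenClient’s actual implementation, and the media path, chunk model, and URL layout are hypothetical, but it captures the “local media first, Synchronizer for the remainder, then register as managed” logic.

```python
# Illustrative sketch of the precache flow described above -- not XenClient's
# implementation. The media path, chunk names, and URL layout are hypothetical.

import os

def acquire_image(chunks_needed, media_path="/media/precache",
                  synchronizer_url="https://synchronizer.example.com"):
    """Resolve each image chunk from local DVD/USB media if present,
    otherwise plan to fetch it from the Synchronizer."""
    sources = {}
    for chunk in chunks_needed:
        local_copy = os.path.join(media_path, chunk)
        if os.path.exists(local_copy):
            sources[chunk] = ("local media", local_copy)            # no network hit
        else:
            sources[chunk] = ("synchronizer", f"{synchronizer_url}/images/{chunk}")
    # Final step in every case: contact the Synchronizer so the device becomes
    # managed (backups, updates, and policies can then be pushed).
    return sources, synchronizer_url

plan, register_url = acquire_image(["base.vhd.000", "base.vhd.001"])
for chunk, (source, location) in plan.items():
    print(f"{chunk}: fetch from {source} ({location})")
print(f"Register as managed device with {register_url}")
```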

Although not a product feature, one best practice to consider for image rollout is a staggered deployment: roll out the initial VM image to your pool of users across several days or weeks to minimize the impact to your networks. You might stagger VM deployment by employee department or Active Directory group, for example.
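
As a trivial illustration of what a staggered schedule might look like, the sketch below spreads deployment waves across days by group; the group names and one-wave-per-day cadence are placeholders you would replace with your own departments or AD groups.

```python
# Minimal sketch of a staggered rollout: one deployment wave per day, split
# by department or AD group. The group names below are placeholders.

from itertools import islice

def stagger(groups, groups_per_day=1):
    """Yield (day, group) pairs so deployment is spread across days."""
    it = iter(groups)
    day = 1
    while True:
        wave = list(islice(it, groups_per_day))
        if not wave:
            break
        for group in wave:
            yield day, group
        day += 1

for day, group in stagger(["Sales", "Engineering", "Finance", "Support"]):
    print(f"Day {day}: deploy the image to the {group} group")
```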

Finally, the Admission Control feature, while not strictly a network-impact feature, is a useful way to provide a better download experience and prevent the Synchronizer from being overloaded with client connection requests. It limits the number of simultaneous connections allowed to the Synchronizer at any given time. If all Synchronizer connections are in use, the next user is asked to try again later rather than enduring inordinate download times after kicking off a download; this also helps limit the load on the Synchronizer itself. The feature is configured by the administrator on the Synchronizer. By default, the number of connections is set to 25, but it can be dialed up or down, or set to ‘Automatic’, which sizes the connection count based on the memory available to the Synchronizer. This setting is found in the Synchronizer VM management screen on XenServer (not in the Synchronizer web management interface).
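
Conceptually, admission control behaves like a counting semaphore over download slots. The sketch below is only an analogy, not the Synchronizer’s code; the default of 25 mirrors the product default, while the memory-based ‘Automatic’ sizing rule shown is an invented ratio purely for illustration.

```python
# Conceptual analogy for admission control: a counting semaphore over download
# slots. The default of 25 mirrors the product default; the memory-based
# sizing rule in automatic() is an invented ratio for illustration only.

import threading

class AdmissionControl:
    def __init__(self, max_connections=25):
        self._slots = threading.Semaphore(max_connections)

    @classmethod
    def automatic(cls, available_memory_gb, gb_per_connection=0.5):
        # Hypothetical rule: scale the connection cap with available memory.
        return cls(max_connections=max(1, int(available_memory_gb / gb_per_connection)))

    def try_admit(self):
        """Return True if a download slot is free; False means 'try again later'."""
        return self._slots.acquire(blocking=False)

    def release(self):
        self._slots.release()

ac = AdmissionControl()            # or AdmissionControl.automatic(available_memory_gb=8)
if ac.try_admit():
    try:
        pass                       # stream the image to this client
    finally:
        ac.release()
else:
    print("All download slots are in use -- please try again later.")
```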

I hope this has given you some ideas for having an informed conversation with your network operations team and minimizing their concerns as you plan your deployment of XenClient devices and the Synchronizer for XenClient. If you have other feature suggestions to minimize network impact or improve user download times, please let us know.