An automated approach to delivering Citrix Workspaces powered by Citrix Cloud services & Google Cloud Platform
While Azure and AWS have been my primary areas of interest in the public cloud realm, I was exposed to Google Cloud Platform (GCP) about a year ago and I’ve been learning about its competitive differentiation ever since.
To me, when it comes to public cloud offerings, I don’t see customers picking one over the other. Instead, a multi-cloud approach will prevail: workloads run on whichever public cloud is best suited to a specific use case, and end users receive a Workspace that aggregates resources from all of these sources. For enterprise customers, I truly believe Google Cloud Platform offers some unique advantages: a global network with hundreds of points of presence around the world, custom machine sizes that keep costs lower, live migration of virtual machines, and competitive pricing (including per-minute billing), to name just a few.
A few months back, I was introduced to a set of scripts that architects from Google and Citrix co-developed to take advantage of the capabilities of Google Deployment Manager. These scripts automate the build out of a Citrix environment in GCP and also tie it to an existing Citrix Cloud subscription. Needless to say, I was intrigued. I decided to try out the scripts and document the process.
Prior to running the scripts, make sure that you have a valid Google Cloud Platform account and a Citrix Cloud (XenApp and XenDesktop Service) account. You can sign up for a Google Cloud Platform free tier account, which gives you a $300 credit over a twelve-month period. You can also obtain a Citrix Cloud trial account by signing up here.
Once you’ve signed up for the GCP free tier account, create a project and make sure you set your default region and zone for Compute Engine from the settings sub section within the GCP console for that project.
You can obtain the automation scripts from the GitHub repository. Clone or download the repository to a machine with PowerShell installed.
You should also install the Google Cloud SDK on the same machine. During the installation process, it will also ask you to authenticate to the Google account tied to your Google Cloud subscription. Make sure you use the right Google account during this process.
Once you’ve installed the Google Cloud SDK, open PowerShell and browse to the folder where you downloaded the Google Cloud deployment scripts from GitHub. Run the “gcloud init” command. This will allow you to configure your default parameters, including the Google account you want to use and the Google Cloud project to build your environment in, and to validate the Google Compute Engine region and zone where the workloads will be built and configured.
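If you prefer to set or verify these defaults explicitly rather than through the interactive prompts, the equivalent gcloud commands look like the following (the project, region, and zone values are placeholders; substitute your own):

```powershell
# Interactive setup: account, project, default region/zone
gcloud init

# Or set the defaults directly from the console
gcloud config set project my-citrix-project
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

# Verify the active configuration before running the deployment
gcloud config list
```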
If you want to list all the compute regions available to you, run “gcloud compute regions list”.
Next, log in to your Citrix Cloud account, click the hamburger icon, and select “Identity and Access Management”.
Within the Identity and Access Management page, select “API Access”.
Create a client by specifying a suitable name and clicking “Create Client”. A client will be created and you will be provided a Client ID and Secret. Also make note of the Customer ID, which is shown on the same page.
Now, go back to your PowerShell console (making sure you are in the folder containing the GitHub scripts) and type “.\deploy”. This will kick off the script, which will ask you for the CTXSecureClientID, CTXSecureClientSecret, and CTXCustomerID, all of which you obtained from the Citrix Cloud console in the steps above.
Optionally, if you prefer a lightweight deployment, you can set the “UseMinimalResources” parameter to True:
“.\deploy.ps1 -UseMinimalResources $True”
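For reference, a full invocation with the Citrix Cloud credentials supplied up front might look like the sketch below. This assumes the interactive prompts correspond to script parameters of the same names; the ID and secret values are placeholders. If you omit them, the script simply prompts for them:

```powershell
# Run from the folder containing the cloned GitHub scripts.
# Placeholder values shown; use the Client ID, Secret, and Customer ID
# from the "API Access" page in Citrix Cloud.
.\deploy.ps1 `
    -CTXSecureClientID "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
    -CTXSecureClientSecret "your-client-secret" `
    -CTXCustomerID "your-customer-id"
```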
This flag reduces overall resource consumption by removing redundant components. The differences are:
- Instances will be deployed with public IPs to avoid necessity of NAT instances
- Single domain controller
- Single cloud connector
I did not use the “UseMinimalResources” switch in my deployment.
The script execution will take around 30 minutes.
Once the script completes, log in to your Google Cloud console and verify that you see two connector VMs, two domain controllers, a XenApp server, a management server, and two NAT servers. All the VMs will be in a running state except the NAT servers. It’s important to note that only the management server has a public IP address assigned to it; it acts as a bastion host that you can RDP to, and from there you can RDP to the rest of the private VMs.
Next, within PowerShell, run the “get-domain-admin-password.ps1” and “get-domain-users.ps1” scripts to obtain the domain administrator password and the user credentials. The deployment creates 10 user accounts by default.
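Both credential scripts are run from the same scripts folder, along the lines of:

```powershell
# Retrieve the generated domain administrator password
.\get-domain-admin-password.ps1

# List the ten generated test user accounts and their credentials
.\get-domain-users.ps1
```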
Next, log in to the Citrix Cloud console and verify the following:
Under “Resource Locations”, you will find a new resource location with two cloud connectors. It will be named “citrix-on-gcp-######” where ###### is a random suffix.
Under “Identity and Access Management”, you will find a new domain defined named “ctx-######” where ###### is the same suffix mentioned above
Under the Studio console, you will find a new machine catalog (catalog-suffix) and a new delivery group (group-suffix). Make sure that the VDA is in a registered state. In my case, it was unregistered; upon further troubleshooting, I found that the VDA had not been installed, and I had to install it manually on the XenApp host to resolve the issue. Remember that you can troubleshoot by establishing an RDP connection to the management server and then using RDP to connect to the appropriate VMs from there.
With that, you’ve verified the deployment. You can now log in to the Workspace URL using the user credentials provided to access applications published via the XenApp VDA in GCP.
Remember that Machine Creation Services is not supported for provisioning on Google Cloud Platform today; it is expected to become available later this year. If you want to add additional XenApp workers to support more users, you can run the “resize -Workers #” command, where # is the number of additional instances you require. This will clone identical VDAs and add them to your existing machine catalog and delivery group.
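For example, to add two more XenApp workers (assuming the resize script follows the same invocation pattern as the deploy and cleanup scripts in the repository):

```powershell
# Clone two additional VDA instances and add them to the
# existing machine catalog and delivery group
.\resize.ps1 -Workers 2
```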
Once you are done testing, the easiest way to clean up your environment is to run the cleanup script: “.\cleanup”. This removes all the compute instances from GCP and also cleans up the connectors, resource location, machine catalog, and delivery group from Citrix Cloud. The process takes less than 20 minutes to complete.
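The teardown is a single command run from the same scripts folder:

```powershell
# Remove the GCP compute instances and the Citrix Cloud
# resource location, connectors, machine catalog, and delivery group
.\cleanup.ps1
```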
I played around with the script many times. The first couple of times, the deployment did not go as planned. While all the VMs were created in GCP Compute Engine, I did not see the resource location defined in Citrix Cloud, which also meant the delivery group and machine catalog were missing. Upon further troubleshooting, I found the cause: I had not entered the CTXCustomerID correctly. I was entering the Citrix Cloud OrgID, which is not the expected input (the script expects the Customer ID shown in the “API Access” section of “Identity and Access Management”). It was surprising that the script continued to run in spite of this information being wrong.
I also ran into a situation where the script execution completed and everything looked as it should, except that my VDA was in an unregistered state. Upon troubleshooting, I found that the VDA installer had never executed on the XenApp server in Google Cloud Platform, so all the VDA services were missing on the server, as you would expect. I was able to resolve this issue by manually installing the Server VDA on the XenApp instance. Once again, I am curious as to why the script did not abort.
For anyone looking to test out Citrix on Google Cloud Platform, but lacking exposure to GCP or just wanting a turnkey method to deploy the workloads, the Google Deployment Manager scripts are definitely the way to go. I was amazed by the simplicity of the whole process, including tearing down the environment.
If you are looking to benchmark performance across various public clouds or looking to conduct a quick POC, then I would highly recommend this approach. That said, you might run into some anomalies as described above, but they are easy to resolve.