As part of Citrix’s U.S. Public Sector Professional Services team, I am constantly working with customers who have stringent security requirements. When deploying solutions on public cloud providers such as Amazon Web Services (AWS), customers take advantage of Security Groups to easily implement virtual firewall protection between all components.

While this helps to provide additional security, it can complicate implementation if fine-grained rules are used and the exact source, destination, and port requirements are not understood. In this blog post, I’ll detail the network communications required to leverage Machine Creation Services (MCS) to create Citrix Virtual Apps and Desktops instances on AWS.

For all our build engagements, we create a prerequisite document tailored to the customer’s environment. This document details network routing, firewall rules, and other items such as permissions that are required for the Citrix Virtual Apps and Desktops solution to function. Presenting this document ahead of the build process is vital to guarding against project delays. Network connectivity is often a discussion that involves multiple teams inside an organization. Including the details of the technical requirements in our prerequisite document (and their rationale) helps all parties to understand the requirements and assist with the implementation before the Citrix Virtual Apps and Desktops build starts.

I won’t cover the entire MCS process end to end (CTX241160), the ports required to access Citrix Cloud, or the ports required for Active Directory. Those communications are required for all Citrix deployments, regardless of whether MCS or AWS is involved. I will, however, discuss everything that enables MCS to function from a network perspective in an AWS deployment. I will also provide real-world examples of how customers have implemented the Citrix Virtual Apps and Desktops service on AWS while blocking direct access to the public internet.

MCS Network Requirements in AWS

To deploy Citrix Virtual Apps and Desktops in AWS, the Cloud Connector or Delivery Controller servers must communicate with the AWS API. The diagram below shows the required communication flow during the MCS process. Before I walk through it in detail, I want to provide an overview of some of the components that are part of the MCS process:

  • XenDesktop Temp Instance: This is a Linux instance created from an Amazon Machine Image (AMI) specified in the Citrix Virtual Apps and Desktops site configuration. It is used as the base image for the “Volume Worker” instance. All the common AWS regions have a default Linux AMI specified; if one is not available for your region, you can add it with the PowerShell cmdlet in the link above.
  • Volume Worker Instance: This is a Linux instance launched from an Amazon Machine Image (AMI) that is captured from the “XenDesktop Temp” instance. This instance is used to prepare machines deployed with MCS.
  • Amazon Elastic Compute Cloud (EC2): The AWS service to deploy virtual machines.
  • Amazon Simple Storage Service (S3): A storage service on AWS that serves as an intermediate storage location during the MCS process.

In this example, I am using the US East (us-east-1) Fully Qualified Domain Names for the AWS APIs, but they will be different depending on your region. The requirements for Citrix Virtual Apps and Desktops and the Citrix Virtual Apps and Desktops service (Citrix Cloud) are basically the same, except for the outbound access required by the Cloud Connectors.
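A quick way to confirm that those region-specific FQDNs resolve from a Cloud Connector or Delivery Controller is sketched below. This is a minimal check using the us-east-1 names; substitute the FQDNs for your own region.

```powershell
# Confirm the region-specific AWS API FQDNs resolve from the Cloud Connector /
# Delivery Controller (us-east-1 shown; substitute your region's names).
Resolve-DnsName -Name 'ec2.us-east-1.amazonaws.com' -Type A | Select-Object Name, IPAddress
Resolve-DnsName -Name 's3.us-east-1.amazonaws.com'  -Type A | Select-Object Name, IPAddress
```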

One of the most important pieces of information to remember is that the “XenDesktop Temp” and “Volume Worker” instances are assigned the Security Groups chosen during the Machine Catalog creation process. Whatever Security Groups you choose for your Virtual Delivery Agents must therefore also allow the communication necessary for the “XenDesktop Temp” and “Volume Worker” instances. If this is not done, you will see an error like the following:

A volume service instance could not be launched in your cloud connection.

Security Groups on AWS are stateful, which means an incoming rule does not have to be explicitly created to receive data back from outgoing communications. That is why all the arrows in the diagram above point in a single direction rather than both ways: we never need to configure Security Group rules for the incoming return traffic. It is rare that true bi-directional communication is required between components. One of the few examples is VDA registration (port 80 needed in both directions), but nothing in the MCS flow requires it.
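To illustrate, here is a minimal sketch of reviewing the outbound rules on the Security Group assigned to your Virtual Delivery Agents, since the “XenDesktop Temp” and “Volume Worker” instances receive the same group. It assumes the AWS Tools for PowerShell module is installed, and the Security Group ID is a hypothetical placeholder.

```powershell
# Hypothetical Security Group ID; substitute the group assigned to your VDAs.
$sg = Get-EC2SecurityGroup -GroupId 'sg-0123456789abcdef0' -Region us-east-1

# List the outbound (egress) rules. The group must allow outbound TCP 443 to the
# EC2/S3 APIs (or their VPC Endpoints); because Security Groups are stateful,
# no matching inbound rule is required for the return traffic.
$sg.IpPermissionsEgress |
    Select-Object IpProtocol, FromPort, ToPort,
        @{ Name = 'Destination'; Expression = { @($_.Ipv4Ranges.CidrIp) -join ', ' } }
```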

Now, let’s walk through the network flow in more detail:

  1. “XenDesktop Temp” Instance Created: The Cloud Connector or Delivery Controller machine will communicate with the AWS API defined in the Hosting Connection to create the “XenDesktop Temp” EC2 instance.
  2. Cloud Connector / Delivery Controller Uploads to S3 Bucket: The S3 API is used to upload a Red Hat Package Management (RPM) file. The file will be installed on the “XenDesktop Temp” Linux instance and facilitates the entire MCS process. Both the on-premises and cloud Citrix Virtual Apps and Desktops offerings source connections to the hosting provider through their respective management servers. For MCS on AWS, this requires outbound access to the S3 and EC2 AWS APIs on port 443 (TCP). Although only one Cloud Connector or Delivery Controller performs this hosting communication role at any given time, the role can move, so all machines must have the required connectivity.
  3. “XenDesktop Temp” Instance Downloads Citrix RPM from the S3 Bucket: The “XenDesktop Temp” instance downloads the Citrix RPM file from the S3 Bucket. Once the Citrix software is installed on the instance, an AMI is created, from which the Volume Worker instance is created.
  4. Cloud Connector or Delivery Controller to Volume Worker Instance: Once the Volume Worker instance has been created, the Cloud Connector or Delivery Controller will communicate with the Volume Worker instance on port 443 (TCP). A quick connectivity check covering these flows is shown after this list.
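As mentioned in step 4, a simple way to validate these flows from a Cloud Connector or Delivery Controller is a TCP port test. This is a minimal sketch; the FQDNs assume us-east-1, and the Volume Worker IP address is a hypothetical example from the VDA subnet.

```powershell
# Targets: the EC2 and S3 APIs (us-east-1 shown) plus a hypothetical Volume Worker
# private IP from the VDA subnet.
$targets = 'ec2.us-east-1.amazonaws.com',
           's3.us-east-1.amazonaws.com',
           '10.0.1.25'

foreach ($target in $targets) {
    $result = Test-NetConnection -ComputerName $target -Port 443 -WarningAction SilentlyContinue
    '{0,-35} TCP 443 open: {1}' -f $target, $result.TcpTestSucceeded
}
```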

How to Reach the Amazon APIs without Direct Internet Access

As I mentioned at the beginning of this post, some customers elect to disable direct access to the internet. That means the route table associated with the subnet does not have a default route (0.0.0.0/0), so connections to IP addresses that aren’t explicitly defined in the route table are not possible. Creating static routes to components on the public internet is not a feasible alternative: services such as the AWS APIs have many IP addresses associated with them, and those addresses change constantly.
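If you want to confirm whether a subnet still has a default route, a quick look at its route table will show it. The sketch below assumes the AWS Tools for PowerShell module and uses a hypothetical subnet ID.

```powershell
# Hypothetical subnet ID; use the subnet hosting your Cloud Connectors / VDAs.
$routeTable = Get-EC2RouteTable -Region us-east-1 -Filter @(
    @{ Name = 'association.subnet-id'; Values = 'subnet-0123456789abcdef0' }
)

# Any result here means the subnet can still reach arbitrary internet addresses.
$routeTable.Routes | Where-Object { $_.DestinationCidrBlock -eq '0.0.0.0/0' }
```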

Thankfully, Amazon and other cloud providers have a solution. To reach the required AWS APIs without a default route, Amazon enables you to create what they call Endpoints. These are created in the VPC management console and allow you to reach the AWS APIs with the default route removed. For other communication outside of AWS such as Citrix Cloud, we will need to leverage a proxy server that will be able to reach the required services. Keep reading to learn more.

The two AWS API services we need to reach for the purposes of deploying machines with MCS are Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2). While each of the Endpoints achieves the same goal, they function differently. A sketch of creating both Endpoints follows the list below.

  • EC2 Endpoint: The EC2 Endpoint is created as an “Interface Endpoint.” When it is created, an elastic network interface (ENI) is provisioned with a private IP address from the subnet you select in your VPC. This IP functions as a private entry point to the EC2 API, and the region-specific DNS name is updated to point to that private IP address. If you are using AWS DNS, the name can be updated for you automatically; if you are using your own DNS servers, you will have to point the region-specific FQDN at the EC2 Endpoint IP address yourself. The EC2 Endpoint can be used by any subnets that can route to it, and best practice is to configure Security Groups to limit access to the Endpoint.
  • S3 Endpoint: The S3 Endpoint is created as a “Gateway Endpoint.” Instead of creating an IP address in your subnet like the EC2 Endpoint does, a Gateway Endpoint updates the route table associated with your subnet so machines can route to the IP addresses needed to communicate with the S3 API. When the S3 Endpoint is created, a Prefix List is generated for it. A Prefix List is a collection of CIDR blocks that can be used to configure VPC Security Groups and route tables. In short, AWS manages every IP address range the S3 API lives on and keeps the route table updated so the communication is allowed.
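As referenced above, here is a minimal sketch of creating both Endpoints with the AWS Tools for PowerShell. All of the resource IDs are hypothetical placeholders; substitute the VPC, subnet, route table, and Security Group from your own environment, and the service names for your region.

```powershell
# Hypothetical resource IDs; substitute values from your own VPC.
$vpcId        = 'vpc-0123456789abcdef0'
$subnetId     = 'subnet-0123456789abcdef0'   # subnet that will host the EC2 Endpoint ENI
$routeTableId = 'rtb-0123456789abcdef0'      # route table used by the VDA subnet
$endpointSg   = 'sg-0123456789abcdef0'       # restricts who can reach the EC2 Endpoint

# EC2 API as an Interface Endpoint: creates an ENI with a private IP in the subnet.
# With Private DNS enabled (and AWS DNS in use), ec2.us-east-1.amazonaws.com
# resolves to that private IP automatically.
New-EC2VpcEndpoint -VpcId $vpcId -Region us-east-1 `
    -ServiceName 'com.amazonaws.us-east-1.ec2' `
    -VpcEndpointType Interface `
    -SubnetId $subnetId `
    -SecurityGroupId $endpointSg `
    -PrivateDnsEnabled $true

# S3 API as a Gateway Endpoint: adds a prefix-list route to the route table
# instead of creating an ENI.
New-EC2VpcEndpoint -VpcId $vpcId -Region us-east-1 `
    -ServiceName 'com.amazonaws.us-east-1.s3' `
    -VpcEndpointType Gateway `
    -RouteTableId $routeTableId
```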

Using a Proxy Server on the Cloud Connector or Delivery Controller

It is common practice inside organizations to block direct access to the internet except through a proxy server. From the Cloud Connector or Delivery Controller machine, you can configure the hypervisor hosting connection (in this case, the connection to the AWS API) to flow through a proxy server. This configuration is separate from the proxy server settings on the Cloud Connector required to reach Citrix Cloud. To establish a hosting connection to AWS through a proxy server, the configuration must be completed through PowerShell as defined in CTX248735.
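The actual hosting connection settings are documented in CTX248735, so I won’t reproduce them here. Before applying them, a quick sanity check (sketched below with a hypothetical proxy address) confirms that the proxy itself can reach the AWS APIs from the Delivery Controller or Cloud Connector; any HTTP response, even an error code, proves the path works.

```powershell
# Hypothetical proxy address; substitute your organization's proxy.
$proxy = 'http://proxy.example.internal:8080'

foreach ($api in 'https://ec2.us-east-1.amazonaws.com', 'https://s3.us-east-1.amazonaws.com') {
    try {
        Invoke-WebRequest -Uri $api -Proxy $proxy -UseBasicParsing -TimeoutSec 15 | Out-Null
        "$api reachable through $proxy"
    }
    catch {
        # An HTTP error still proves the proxy path works; no response at all does not.
        if ($_.Exception.Response) { "$api reachable through $proxy (HTTP error returned)" }
        else                       { "$api NOT reachable through $proxy" }
    }
}
```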

While the Delivery Controller and Cloud Connectors can be configured to use a proxy server to reach the AWS APIs, the “XenDesktop Temp” instance is temporary and does not support using a proxy for outbound network access. It is recommended to control access to the AWS APIs from the “XenDesktop Temp” EC2 instance by using an S3 Endpoint, which modifies the route tables. Using a proxy server here would not be that beneficial anyway; while not supported, it would likely work if the base Linux AMI used to create the EC2 instance were modified.
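To verify that the Gateway Endpoint route is in place for the subnet the “XenDesktop Temp” and VDA instances use, you can look for a prefix-list route in the associated route table. The sketch below uses a hypothetical route table ID and assumes the AWS Tools for PowerShell module.

```powershell
# Hypothetical route table ID; use the table associated with the VDA subnet.
$routeTable = Get-EC2RouteTable -RouteTableId 'rtb-0123456789abcdef0' -Region us-east-1

# The S3 prefix list for the region and the CIDR blocks it currently contains.
Get-EC2PrefixList -Region us-east-1 | Where-Object { $_.PrefixListName -like '*.s3' }

# A route whose destination is a prefix list confirms the S3 Gateway Endpoint is active.
$routeTable.Routes | Where-Object { $_.DestinationPrefixListId }
```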

Key Takeaways

I hope this overview helps you to understand the network requirements when interacting with the AWS API to deploy machines with MCS. I want to leave you with a few key takeaways from this post:

  • Validate VDA Security Groups: Security Groups enable a stronger security posture by easily allowing only the communications that are required. If you take advantage of this functionality, ensure that the Security Group rules created for your Virtual Delivery Agents allow the communication necessary for the “XenDesktop Temp” and “Volume Worker” instances (since they all share the same Security Group). This is where most of the MCS failures will occur.
  • AWS Endpoints: Leverage the AWS endpoint features to allow access to the required APIs without the default route in place.
  • Proxy Servers Can Be Used: While you should use endpoints to reach the AWS APIs, a proxy server with the necessary connectivity will be required for communication to Citrix Cloud if the default route is removed.