Citrix App Layering version 4.1 was released in 2017, shortly after Citrix acquired Unidesk. This version already included connectors for Microsoft Azure that allowed customers to support both on-premises deployments and Azure deployments using the same layers. These connectors have essentially remained the same since 2017, with slight modifications.

With the release of Citrix App Layering 2211, we have rearchitected the Azure Connectors with many significant benefits. Implementation of the new connectors is very different from the original, and in this blog post, I’ll cover these architectural changes and discuss how to set up and use the new connectors.

Architecture Changes

In the original connector architecture, the App Layering Appliance managed and performed the steps necessary to create and update layers. The appliance communicated with the hypervisor and created a packaging machine used to capture the installation of software and settings into a disk, which would become a layer. To do this, the disks were copied to and from the hypervisor storage. Because the disks can be very large, this process was often time consuming.

In 2019, the App Layering engineering team developed a new connector process in which much of the work of creating and publishing layers is offloaded to the packaging machine itself. In this process, a dual-boot packaging machine is created. It boots initially into a Windows PE environment that sets up the packaging machine, then reboots into the actual packaging machine used for the software installation. After you finalize the layer, it boots back into Windows PE to finalize the layer and write it directly back to the App Layering Appliance using iSCSI.

This change was a big deal for many reasons. First, it was much more scalable because the packaging process is mostly independent of the App Layering Appliance, and most of the performance bottlenecks were removed from the packaging and publishing process. The new process also greatly simplified how App Layering works with different hypervisor disk types because we weren’t converting from one disk type to another, and we weren’t copying the disks as many times as with the original process. The result? We get better compatibility with hypervisor disk types, and, more importantly, the packaging process is much faster.

The new Azure connectors use an offload compositing architecture that provides the advantages above and lets us provision Gen 2 UEFI-based virtual machines, which wasn’t possible before.

Deploying the New Solution in Azure

These new features were released in version 2211, but to use them with that version, you must deploy the App Layering Appliance within Azure, because in 2211 the connectors only work with Azure Managed Identity. However, in version 2304 we added the ability to use the new connectors with an Azure Service Principal, which means you can still use an on-premises appliance to create layers and publish images to Azure. Later in this post, I will discuss whether it makes more sense to use a single appliance or to deploy a new appliance into Azure if you plan to support both on-premises deployments and Azure deployments.

When deploying a new appliance in Azure, the first step is to download the new appliance package to a location where you can run a PowerShell script against Azure. I keep a console VM where I run all my scripts, and I used that. Then you need to either create the resources in Azure in which to run the appliance and packaging process, or allow the appliance creation script to create those resources during the install.

The following are required:

  • An Azure account and subscription
  • A virtual network in Azure (or Azure Government)
  • A network file share (Azure or Azure Government specifics)
  • Azure (or Azure Government) Resource Manager
  • Azure PowerShell v6 – Module “Az”
  • Assigned managed identity for the appliance

For more details, see the Citrix product documentation.

The Appliance download includes:

  • The appliance VHD file
  • The installer PowerShell script
  • A deployment template json
  • The App Layering agent
  • The App Layering OS machine tools

After downloading the package, set up the resource group you will use for the appliance and packaging machines (unless you are going to let the script create it). Then give the appliance "Contributor" rights on the resource group and on the network you will be using for these machines, because the appliance needs to create NICs when it creates virtual machines.

The script will create a storage account, and a container within it, to hold the appliance VHD files. It is much easier to let the script create these than to try to pre-create them, but you can pre-create the storage account as well. Then run the install script; it will prompt you for all the information it needs. Remember to install the Azure PowerShell module first and log on to Azure with an account that has the permissions needed to create the appliance and its resources.
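If your console VM doesn't already have the Az module and a signed-in session, a minimal preparation sketch looks like this (the installer script itself ships in the appliance download, so I haven't named it here):

# Install the Azure PowerShell module and sign in before running the installer script
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Connect-AzAccount
# Select the subscription the appliance will be deployed into
Set-AzContext -Subscription "<your-subscription-id>"
# Now run the installer PowerShell script included in the download and answer its prompts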

Managed Identity

After deploying the new appliance, you must go into the "Identity" settings for the appliance and enable Managed Identity. You can use either User Assigned or System Assigned, but do not configure both; that causes issues because the connector won't know which to use. I used System Assigned. Open the appliance virtual machine in the Azure portal and choose "Identity."

Then assign permissions to the appliance for the resource groups and networks you will use for compositing engine machines. Open the resource group, choose Access control (IAM), then Role assignments, and assign the machine identity the "Contributor" role.

Remember to also assign contributor permission to the network used.
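If you prefer scripting the role assignments to clicking through the portal, here is a rough sketch using the Az module; the resource group, VM, and network names are placeholders for whatever you used in your environment:

# Look up the appliance VM and its system-assigned identity
$appliance = Get-AzVM -ResourceGroupName "AppLayering" -Name "AL-Appliance"
$principalId = $appliance.Identity.PrincipalId

# Contributor on the resource group used for packaging and publishing resources
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" -ResourceGroupName "AppLayering"

# Contributor on the virtual network used for the compositing engine machines
$vnet = Get-AzVirtualNetwork -ResourceGroupName "Network" -Name "Network"
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" -Scope $vnet.Id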

App Registration

As of version 2304, you can also use an Azure Service Principal to define permissions for the new connectors. This is the only option available if you are using an on-premises App Layering appliance. If you are using an Azure-based appliance, the managed identity option is a better choice because you won't have to worry about managing the secrets used for service principals.

To set up the connectors using a service principal, go into App Registrations in Azure and add a new registration. Call it something like Al_Deployment_Connector. After creating the registration, go into the "Certificates and Secrets" section and add a new "Client Secret." Make sure to capture the secret value (it is only shown once) along with the following information from the App Registration Overview page:

  • Application (client) ID
  • Directory (tenant) ID

This will be required to configure the connectors in App Layering.

Then grant that service principal permissions on the resource groups and network you will be using for App Layering, as explained in the section above.
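The same setup can be scripted. The sketch below assumes a recent Az module and creates the registration, a client secret, and the role assignments in one pass; the resource group and network names are placeholders, and the secret value is only retrievable at creation time:

# Create the app registration and service principal (a client secret is generated automatically)
$sp = New-AzADServicePrincipal -DisplayName "Al_Deployment_Connector"

# Values needed when configuring the App Layering connector
$clientId = $sp.AppId                           # Application (client) ID
$tenantId = (Get-AzContext).Tenant.Id           # Directory (tenant) ID
$secret   = $sp.PasswordCredentials.SecretText  # Client secret value -- store it securely

# Grant the service principal Contributor on the App Layering resource group and the network
New-AzRoleAssignment -ApplicationId $clientId -RoleDefinitionName "Contributor" -ResourceGroupName "AppLayering"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "Network" -Name "Network"
New-AzRoleAssignment -ApplicationId $clientId -RoleDefinitionName "Contributor" -Scope $vnet.Id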

Appliance/Elastic Layering Share

There are many choices for storage in Azure, including cloud-native options like Azure Files and Azure NetApp Files, as well as legacy solutions like just using a Windows server or cluster. The one you choose depends on your organization’s requirements, standards, and how you plan to use the storage. For example, if you don’t plan to use elastic layering, it’s fine to use any Windows server with a share created on it because it will only be used for updates to the App Layering appliance. Or you can use Azure Files, though it is slightly harder to deploy.

If you are using elastic layering and/or full user layers, you will probably want to use Azure NetApp Files or another solution that can provide continuous availability (a feature that is still in preview for Azure NetApp Files). Continuous availability provides failover between nodes without dropping connections, which is important because App Layering simply mounts elastic layers from the defined CIFS/SMB share and relies on the storage to be highly available. If you choose non-HA storage, be aware that if a node fails, all users will have to log off and back on to reconnect to a surviving file server node. The same applies if you use DFS for higher availability.

Another important design restriction to consider: Azure Files has a per-share limit of 2,000 file handles to any single resource, which includes the share root, directories, and files. Because each connection holds a persistent read handle to the share root, only up to 2,000 users can connect to an Azure Files elastic layer share at a time. If you need to support 5,000 users, you will need to split them among three shares. If this is just for user layers, it's easier, because in App Layering you can define user layer shares by AD group. If it's for elastic layering, you will need to split the computers into different OUs so you can apply different registry settings for the path to the EL share using Group Policy Preferences. You will also need a sync application or a robocopy script to keep the primary EL share managed by the appliance in sync with the replicated shares.

Then, to define which share a set of VDAs will point to, use the following registry setting:

Key: HKLM\SOFTWARE\Unidesk\ULayer
Value: RepositoryPath
Type: REG_SZ
Example: \\ClusterFS\Unidesk
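If you'd rather push this value with a script or a startup task than with Group Policy Preferences, the PowerShell equivalent run on the VDA is simply:

# Point this VDA at its elastic layer share
$keyPath = "HKLM:\SOFTWARE\Unidesk\ULayer"
New-Item -Path $keyPath -Force | Out-Null
New-ItemProperty -Path $keyPath -Name "RepositoryPath" -PropertyType String -Value "\\ClusterFS\Unidesk" -Force | Out-Null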

You can download an admx file supporting this setting here.

Azure NetApp Files with basic networking has a limit of 1,000 connections to a share, but if you upgrade to Standard networking, ANF can support up to 65,000 connections. This makes the design much easier.

If you do use Azure Files, the App Layering appliance must be configured in a special way to allow it to access the share. The appliance's Samba client cannot connect to the share using a normal AD account. Instead, it must be configured the way a Linux OS would connect to the share, using the storage account name and an access key.

To configure this within the Azure portal, open the file share within the storage account and click the "Connect" button. In that interface there is a Linux tab; select it and click "Show Script." Within that script are a username (which matches the storage account name) and a password (which is an access key). The script also shows the path to use; just convert the forward slashes to backslashes.

Once configured, the appliance can access the Azure Files share.
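You can also gather the values the connect script displays without opening the portal. This sketch assumes a storage account named "alelstorage" in the "AppLayering" resource group with a share named "elasticlayers"; all three names are examples:

# The username for the appliance share settings is the storage account name,
# and the password is one of its access keys
$account = "alelstorage"
$key = (Get-AzStorageAccountKey -ResourceGroupName "AppLayering" -Name $account)[0].Value

# The SMB path uses backslashes instead of the forward slashes shown in the Linux script
$uncPath = "\\$account.file.core.windows.net\elasticlayers"
Write-Output $uncPath
Write-Output $key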

What if I already have an On-Premises App Layering Deployment?

If you are an existing App Layering customer, or you need to support both on-premises deployments and Azure deployments, then you first must figure out the best strategy for your App Layering appliance architecture. As of version 2304, you can use just an appliance on-premises, just an appliance in Azure, or appliances both on-premises and in Azure. If you are only deploying new operating systems in Azure, such as Windows 10/11 multi-session, then it makes sense to deploy a new appliance in Azure for that operating system, because it can only be used in Azure. However, if you want to support some of the same operating systems in Azure that you use on-premises, with the same layers, then it's less straightforward. If you have the latter requirement, it is always easier to use a single appliance to publish images to the secondary hypervisor. Consider, though, that images are very large, and transferring them from on-premises to Azure uses significant bandwidth. I would probably not use that approach unless there was at least 10 Gb/s of bandwidth between the on-premises datacenter and Azure. The other approach is to deploy two appliances, one on-premises and one in Azure, and then synchronize layers between them using the import/export functionality built into App Layering.

Synchronizing Appliances

If you have an existing App Layering appliance on premises and you plan to deploy VDAs within Azure using the same OS type you use on premises, you can synchronize layers between your existing App Layering appliance and the Azure appliance. There is no way to fully automate this process; however, it is easy to do using the import/export functionality provided by App Layering. To use this functionality, you will need a CIFS/SMB share either in the on-premises datacenter or in Azure. I prefer on-premises so the exports go quicker.

To perform this process, open the App Layering UI and go to "Layer." On the top right of the interface, there are three dots. Click them and choose "Export Layers."

You will be prompted for the share path and credentials to access the share. Then, under "Version Selection," choose "Edit Selection" and pick the OS layers and app layers to export; choose only the latest versions of the OS layer and app layers. This will export those layers to the share. Of course, make sure the share can handle the size of all the chosen layers. If it cannot, you can move them in batches, but it's much better to size the share for all the layers you will synchronize.

Important Note

Do not synchronize platform layers; they can't be used in Azure. Also, before copying an OS layer to Azure, add a version to it and install the Azure Virtual Machine Agent in that version. Without the agent, you cannot access packaging machines in Azure. Finally, ensure that RDP works in your OS layer, because Azure has no real console session for virtual machines and uses Remote Desktop for console access.
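A quick sanity check you can run inside the new OS layer version is to look for the agent's service before finalizing; the service name and MSI path below are what I have typically seen, so treat them as assumptions:

# Check whether the Azure VM Agent is installed (it normally registers the WindowsAzureGuestAgent service)
if (Get-Service -Name "WindowsAzureGuestAgent" -ErrorAction SilentlyContinue) {
    Write-Output "Azure VM Agent is already installed."
}
else {
    # Install from a previously downloaded copy of the agent MSI (path is an example)
    Start-Process msiexec.exe -ArgumentList '/i "C:\Temp\WindowsAzureVmAgent.msi" /quiet /norestart' -Wait
}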

After you export the layers, log into your Azure appliance and import them. I recommend first importing just the OS layer because there is a time limit on the import jobs and they will stall if the OS layer import takes too long. Once the layers are imported, you can create a new Platform Layer in Azure, then test a published image. Of course, before doing any packaging in Azure, you should read the rest of this post, which covers how to work with the new Azure Deployment Connectors.

Creating OS Layers

If you are not starting with an on-premises appliance, you will want to create your OS layer directly in Azure. This post does not cover that process in detail, but the basic steps include:

  • Creating a gold image VM from the Azure Marketplace
  • Installing the Citrix App Layering Gold Image Tools
  • Configuring the image
  • Importing the image into the appliance as an OS Layer using the provided script

See the Citrix product documentation for more details.

New Connectors

As I covered in the beginning of this post, App Layering 2211 and later now includes new connectors for Azure Deployments. These connectors take a little more setup than the previous Azure connectors, so I wanted to cover below how I would recommend starting with them. You may have different requirements based on your organization’s policies and infrastructure, but this should be a good place to start.

Components

The new connector is very flexible, but that flexibility does make it a bit harder to understand the options available. I will explain what I have learned about each component of the solution, and I will outline how I would configure and use the connectors for most use cases.

The original Azure connectors are named Azure, Machine Creation for Azure, Azure Government, and Machine Creation for Azure Government, and they are now considered deprecated. The new connectors are called Azure Deployment and Machine Creation for Azure Deployment, and they work for both Azure and Azure Government.

The new connectors work in combination with several Azure components. First, there are several Azure Template Specs you must create for the connectors; some are required and some are optional, depending on the desired outcome of the process. Keep reading to learn more, and check out the Azure deployments product documentation.

Machine

The machine template spec is used to create a virtual machine in Azure. It is a required component of the connector.

If a layered image deployment is also specified in the connector, the resources created by the Machine deployment are deleted after the layered image deployment completes. Otherwise, App Layering does not delete the resources (unless the deployment fails). You would use the layered image deployment when publishing an image, but not when creating a packaging machine.

Cache Disk

The Cache Disk deployment creates an Azure managed disk and is also required for all the new Azure connectors. This disk contains the compositing engine boot image; the App Layering appliance uploads the contents to the disk after it is created.

If a Boot Image deployment is specified, the resources created by the Cache Disk deployment are deleted after the Boot Image deployment completes. Otherwise, App Layering deletes the resources during cache cleanup.

Layered Image

The Layered Image deployment is an optional deployment type. The resources it creates are the output of publishing a layered image. No particular resource type is required; you can use the Layered Image deployment to produce a compute gallery image, a managed disk, or any other type of resource.

Boot Image

The Boot Image deployment is an optional deployment type. The resources it creates are used to build the OS disks of the VMs created by Machine deployments. It isn't required to create any particular type of resource; however, it must create something that can be used as the source of an OS disk for a VM, such as a compute gallery image or a disk snapshot. The Boot Image deployment lets you create a packaging machine or layered image without a managed disk, which some organizations might prefer. I have not used this type in the examples below, but you can if you have this requirement.

Resource Groups

When you set up the new connectors, you choose which Azure Template Specs to use and an Azure resource group in which the respective objects will be created. In my lab, I used the same resource group for the packaging machine components and the publishing components, including the compute gallery. However, I got a tip from the developer of the connectors that it is better to use separate resource groups for different components, because it segregates them and helps if you have to clean up later. His suggestion is to have separate resource groups for:

  • Cache disk and boot image
  • Machine
  • Layered image

If you use this approach, I recommend a naming convention that identifies the resource groups as being used for App Layering and relates them to the component types above.

How do I get this all to work?

You might be asking, “What does all this mean, practically speaking, for how to set up connectors?” If you use the following recommendations, you can set up the connectors quickly, and they will work for most situations. I have divided the configuration up into:

  • Packaging Machine Connectors
  • Publishing Connector for MCS
  • Publishing Connector for PVS

Create the Template Specs

The first step in getting the connectors configured is to add the default templates into Azure. To do this, access "Template Specs" in Azure and open the documentation for the App Layering templates here. Create a Template Spec for each App Layering template type, assigned to the same resource group you created for your appliance. Choose the appropriate settings, then click "Edit Template," where you can copy the starter template from the documentation and paste it into the template in Azure. You will then have a Template Spec for each type of deployment.

You can customize these templates, but I prefer to do the customization in the connector settings rather than in the template, which allows you to use fewer templates.
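You can also script the Template Spec creation rather than pasting into the portal editor. This sketch assumes you have saved each starter template from the documentation to a local JSON file named after its deployment type, and that the resource group and region match your appliance deployment:

# Create one Template Spec per App Layering deployment type
$rg = "AppLayering"
$location = "eastus"
foreach ($type in "Machine", "CacheDisk", "LayeredImage", "BootImage") {
    New-AzTemplateSpec -ResourceGroupName $rg -Name "AL-$type" -Version "1.0" -Location $location -TemplateFile ".\$type.json"
}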

Using a Service Principal (App Registration)

If you are using an Azure Service Principal with the new connectors, there is an extra section in the connector configuration where you enter the account information you saved when creating the Azure Service Principal.

After adding the account, click Connect. This tests your credentials, and you will then be able to access the resource groups and Template Specs defined in Azure.

Packaging Machine Connector

For Packaging Machines for most implementations, use the Azure Deployment Connector with the Machine and Cache Disk templates.

The UI for the new connectors is self-explanatory: you choose which deployment types you will use, choose the associated Template Specs for those, and then enter the custom data at the top. The custom data settings flow into each deployment template and are used where they make sense. My example configuration follows.

This creates virtual machines in the App Layering resource group to use for packaging applications. If you are using the standard Template Specs, add the following custom data in the Defaults section at the top of the connector config.

Please note: this is also where you define the resource groups for each component type.

Custom Data

{
"subnetId": "/subscriptions/ 2c77cc5b-3e72-4gh7-8c11-4he51d485r33/resourceGroups/Network/providers/Microsoft.Network/virtualNetworks/Network/subnets/AzureSubnet",
"licenseType": "Windows_Client"
}

This defines which subnet to use and specifies that you want to use your hybrid use licenses for the packaging machines. To find the subnetId, look at the virtual network's properties, then append /subnets/<yoursubnetname>.
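You can also pull the subnetId straight from PowerShell instead of composing it by hand; the network and subnet names here match the example above:

# List the subnet resource IDs for the virtual network used by the connectors
$vnet = Get-AzVirtualNetwork -ResourceGroupName "Network" -Name "Network"
$vnet.Subnets | Select-Object Name, Id

# Or grab just the subnet used in the custom data
(Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AzureSubnet").Id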

Publishing Connector MCS

For Publishing an MCS Layered Image, you can use either the Azure Deployments Connector or the Machine Creation for Azure Deployments Connector. The difference is that the Machine Creation connector will start up the published image and wait for it to be shut down by the scripting process on the image. This allows for any defined layer scripts to be run on the published image before using it in MCS.

With MCS, App Layering supports putting images in an Azure Compute Gallery. This is a great way both to organize images and to replicate them between regions, and in this example I assume you will use a compute gallery. Create a new Azure Compute Gallery within the resource group you use for the App Layering appliance and packaging machines; you do this within the Azure Compute Gallery service in the Azure portal.
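If you prefer to create the gallery with PowerShell, a one-liner like the following works; the gallery name matches the custom data example below, while the resource group and region are placeholders:

# Create the compute gallery that published layered images will be placed in
New-AzGallery -ResourceGroupName "AppLayering" -Name "Al_Azure_Compute_Gallery" -Location "eastus"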

This connector will use the Machine, Cache Disk and Layered Image Templates.

Custom Data

{
"gallery": "Al_Azure_Compute_Gallery",
"subnetId": "/subscriptions/xx-xx-xx-xx/resourceGroups/Network/providers/Microsoft.Network/virtualNetworks/Network/subnets/AzureSubnet",
"licenseType": "Windows_Client"
}

To choose the image to deploy in Studio when using MCS with the compute gallery, first select the Azure Resource Group, then the Compute Gallery, then the image and version. You will have different versions as you publish with an image template over time.

Publishing Connector PVS

For PVS, the publishing connector is no different from the on-prem version. The key is that you will use the packaging connector under the Offload Compositing settings.

Also, when using an Offload Compositing connector with PVS, you must share the store directly on the PVS server. Some organizations use a dev PVS server for this integration and then copy the vDisks to the production PVS servers, while others go directly to production. I prefer the dev server approach because it also allows you to run publishing scripts against the vDisk before moving it to a production server. Check out this example of how to use publishing scripts.

Lessons Learned

I am adding this section after writing the original post, based on working with the new solution with several customers and in my lab.

In my lab, I was cleaning up some disks to cut down my Azure bill, and afterward I found that I could no longer create a packaging machine or publish an image. What I had done was delete the cache disks used by the connectors. These disks have names that start with "ALCeBootImage," and they are stored in the resource groups defined in your templates.

App Layering does not automatically recreate these disks if you delete them. To get them recreated, modify the Cache Disk template in your connectors so that the system sees it as changed. In my case, I removed the licenseType, saved the template, then went back in, added the licenseType back, and saved again. After saving the changed template, the cache disks were recreated the next time I used a connector.

Something else that came up during this time was that a customer wanted to narrow down the permissions assigned to the Appliance Managed Identity because they were not happy giving Contributor rights on the Resource Groups defined for App Layering. The Citrix stance on this is to require Contributor permissions on the Resource Groups used for App Layering. This is because we cannot be sure which permissions will be required based on which settings are defined in the templates. I would highly recommend you configure the Managed Identity permissions using Contributor.

That said, I did go through the process of figuring out more fine-grained permissions. The process is to create packaging machines and images, wait for the errors from Azure, and then add the required permissions to a custom role based on each failure. I started by giving the appliance the "Virtual Machine Contributor" role on the App Layering resource groups defined in the connectors; this allows the appliance to create the compositing engine virtual machines. I also gave the appliance the "Template Spec Reader" role because it must have access to the Template Specs.

I then added the following permissions to a custom role I called AppLay:

  • Compute/disks/write
  • Compute/disks/beginGetAccess/action
  • Compute/disks/endGetAccess/action
  • Compute/images/write
  • Compute/images/read
  • Compute/images/delete
  • Compute/galleries/read
  • Compute/galleries/images/read
  • Compute/galleries/images/write
  • Compute/galleries/images/delete
  • Compute/galleries/images/versions/read
  • Compute/galleries/images/versions/write
  • Compute/galleries/images/versions/delete

These allow the appliance to create managed disks, get SAS tokens to access and write to them, and manage compute gallery images. After these role assignments, I was able to package applications and publish images to both PVS and MCS, including to a compute gallery.
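If you want to build this role with PowerShell instead of the portal, the usual pattern is to clone an existing role definition and swap in the actions listed above; note that the full action strings carry the Microsoft.Compute prefix, and the subscription ID and resource group in the assignable scope below are placeholders:

# Build the custom "AppLay" role from the permissions above
$role = Get-AzRoleDefinition -Name "Reader"   # any existing role works as a starting template
$role.Id = $null
$role.Name = "AppLay"
$role.Description = "Disk and compute gallery permissions needed by the App Layering appliance"
$role.Actions.Clear()
$actions = @(
    "Microsoft.Compute/disks/write",
    "Microsoft.Compute/disks/beginGetAccess/action",
    "Microsoft.Compute/disks/endGetAccess/action",
    "Microsoft.Compute/images/write",
    "Microsoft.Compute/images/read",
    "Microsoft.Compute/images/delete",
    "Microsoft.Compute/galleries/read",
    "Microsoft.Compute/galleries/images/read",
    "Microsoft.Compute/galleries/images/write",
    "Microsoft.Compute/galleries/images/delete",
    "Microsoft.Compute/galleries/images/versions/read",
    "Microsoft.Compute/galleries/images/versions/write",
    "Microsoft.Compute/galleries/images/versions/delete"
)
$actions | ForEach-Object { $role.Actions.Add($_) }
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>/resourceGroups/AppLayering")
New-AzRoleDefinition -Role $role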

I then created a custom role called AppLayNetwork in the Network Resource Group with these permissions:

  • Network/networkInterfaces/write
  • Network/networkInterfaces/read
  • Network/networkInterfaces/join/action
  • Network/networkInterfaces/delete
  • Network/virtualNetworks/subnets/join/action

These allow the appliance to create and remove network interfaces and to join the subnet.

Lastly, if you need to support multiple regions when you publish to a compute gallery, you will need to add the following to the Layered Image Template Spec.

"targetRegions": "[if(contains(variables('custom'), 'targetRegions'), variables('custom').targetRegions, createArray(variables('location')))]"

It goes at the end of the "variables" section of the template.

Then in the custom data for the connector you would add whichever regions you want to include in the following format:

"targetRegions": ["eastus", "westus","eastus2"]

Conclusion

When my engineering colleagues told me that you would have to use Azure Templates with our new connectors, my fear was that it would be difficult to implement. After digging into the new technology and how they have implemented it, I’ve found it fairly easy to deploy by using the starter templates and the custom data fields in the connectors. They did a great job with the new connectors, and I am sure they will help meet your ongoing needs.

One last note: if you want a more in-depth, step-by-step guide to the new connectors, my fellow Citrites Wendy Gay, Gavin Strong, and David Pisa have created several fantastic blogs on the topic. These are well worth a look.