Preface

In January 2011, Citrix released the first service pack for XenClient, and by now you may have first-hand experience installing XenClient (hopefully managed by Synchronizer!) for pilots or even production deployments.

As an administrator or technology person, have you ever asked yourself:

“What the heck are all these VHD files in the storage directory?”
“Which one belongs to which VM, and when/why are they created?”

I’m writing this blog because I asked myself these questions, and so I started to dig into the topic of disk management with XenClient. Understanding this part of the technology will also give you a clear view of critical operations such as publishing VMs, and of how VM updates and backups work in XenClient environments.

XenClient leverages VHD chain technology

Virtual machines on XenClient use the Virtual Hard Disk (VHD) file format. The format was created by Connectix – later acquired by Microsoft – for what is broadly known as Microsoft Virtual PC. Since June 2005, Microsoft has made the VHD Image Format Specification available to third parties under the Microsoft Open Specification Promise.

One of the cool features is the differencing hard disk image format, which enables the concept of a golden image: when enabled, all changes to a hard drive contained within a VHD (the parent image / root disk) are stored in a separate file (the child image / differential disk / leaf). Options are available to undo the changes, or to merge them permanently into the parent VHD.
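
To make the parent/child relationship concrete, here is a minimal sketch using vhd-util, the VHD tool available in Dom0 (the file names are illustrative, and the option syntax is based on the blktap vhd-util builds I know – treat it as an assumption, not gospel):

     # Create a 10GB dynamic VHD (size is given in MB); this will be the parent
     vhd-util create -n parent.vhd -s 10240

     # Create a differencing child ("snapshot") on top of it; from now on all
     # writes land in child.vhd while parent.vhd stays untouched
     vhd-util snapshot -n child.vhd -p parent.vhd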


Part I – XenClient with a local VM created from scratch (not managed)

I’ve created an XP-based virtual machine with all default settings (80GB disk). Remember: XenClient uses thin provisioning, so only the space actually used is consumed on the physical hard drive. Inside the VM, the disk is reported as drive “C” and has 80GB of space, as expected.

In this case a single VHD file is present on the Dom0 file system, approximately 2GB in size, named ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd.
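
You can see the thin provisioning at work from Dom0 by comparing the virtual size the guest sees with the space the file actually occupies. A quick sketch (assuming vhd-util’s query subcommand supports -v for the virtual size, as in the blktap builds):

     cd /storage/disks
     # Actual space consumed on the Dom0 file system (~2GB here)
     du -h ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd
     # Virtual size presented to the guest (reported in MB, ~80GB here)
     vhd-util query -n ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd -v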

Below are the disk-related entries from the configuration file, taken from /config/vms/vm_uuid.db:

     "disk": {
      "0": {
        "path": "\/storage\/isos\/xc-tools.iso",
        "type": "file",
        "mode": "r",
        "device": "hdc",
        "devtype": "cdrom",
        "snapshot": ""
      },
      "1": {
        "path": "\/storage\/disks\/ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd",
        "type": "vhd",
        "mode": "w",
        "device": "hda",
        "devtype": "disk",
        "snapshot": ""
      }
     }

Part II – What happens when you “publish” the VM to your Synchronizer?

The process of copying a local VM to the backend (Synchronizer) is called publishing. An uploaded image can then be assigned to users or groups along with a set of policies.

This process includes the following steps (the first two are sketched below):

  • Creating a snapshot of the VM (the VM’s current disk becomes read-only)
  • Creating a writable leaf where all further writes/diffs are stored
  • Copying the read-only VHD to the Synchronizer
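
To illustrate the first two steps at the VHD level, here is a hand-made sketch – my reconstruction of one plausible way to arrive at the layout we observe, not XenClient’s actual code (the cf8c… UUID is padded out purely for illustration; the real one is random): the current data file is renamed to a new UUID and becomes the read-only parent, and an empty leaf is created under the original name, so the path in the VM’s config file stays valid:

     cd /storage/disks
     # 1) Rename the current data file; it becomes the read-only golden image
     mv ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd \
        cf8c0000-0000-0000-0000-000000000000.vhd
     # 2) Create an empty writable leaf under the ORIGINAL name, chained to the
     #    renamed parent, so /config/vms/<uuid>.db keeps working unchanged
     vhd-util snapshot -n ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd \
                       -p cf8c0000-0000-0000-0000-000000000000.vhd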

Due to the snapshot, we now have two VHD files in the /storage/disks directory:

It is also important to understand that the VM now runs from the leaf (ca3c….vhd); the header of that VHD file points to a parent with the UUID cf8c…, which is the golden image (2GB in size).

     "disk": {
      "0": {
        "path": "\/storage\/isos\/xc-tools.iso",
        "type": "file",
        "mode": "r",
        "device": "hdc",
        "devtype": "cdrom",
        "snapshot": ""
      },
      "1": {
        "path": "\/storage\/disks\/ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd",
        "type": "vhd",
        "mode": "w",
        "device": "hda",
        "devtype": "disk",
        "snapshot": "none",
        "disktype": "system"
      }
     }
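
Note that the config still references the very same ca3c… path as before publishing; only the VHD header knows about the parent. You can confirm the relationship from Dom0 (hedged: the exact output format may vary between vhd-util versions):

     # Prints the parent of the leaf; a golden image reports "has no parent"
     vhd-util query -n /storage/disks/ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd -p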

Part III – What happens when you install a VM from the Synchronizer?

Once an IT organization has published a set of images, it’s really easy to grab a preconfigured VM from the datacenter. XenClient offers to install from media (CD/DVD) or from the Synchronizer, with user data (from a backup) if requested.

This process includes the following steps:

  • Copy the image from the Synchronizer
  • Create a snapshot, so the image is read-only
  • Create a writable leaf for all block-level changes (differences)

The resulting setup in terms of config and VHD files is pretty much the same as when creating a VM from scratch and publishing it: XenClient starts out with two VHD files.
 
Part IV – What happens when backing up a VM to the Synchronizer?
 
Assume you are using a managed VM. You would therefore have a read-only golden image, downloaded originally, and a writable leaf that grew over time as you worked with the VM. As soon as a backup is triggered, either by the backup policy or by the user, the following steps are taken (sketched after the list):
 

  • Creating a snapshot of the current leaf – this makes the leaf read-only
  • Creating a new leaf where further block-level changes are stored
  • Sending the leaf from step one to the Synchronizer
  • After a successful transfer, coalescing that (now read-only) parent into the new leaf, which keeps the chain short without ever touching the shared golden image
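
In chain terms, a rough sketch with invented file names (the coalesce in step four is handled internally by XenClient, so no command is shown for it):

     cd /storage/disks
     # Chain before the backup:  gold.vhd <- leaf1.vhd (writable)
     # Steps 1+2: snapshot leaf1; it becomes read-only and a fresh writable
     # leaf2 is stacked on top:  gold.vhd <- leaf1.vhd <- leaf2.vhd
     vhd-util snapshot -n leaf2.vhd -p leaf1.vhd
     # Step 3: only leaf1.vhd (the diffs since the last backup) is uploaded
     # Step 4: XenClient merges the blocks of leaf1 upward into leaf2, leaving
     #         gold.vhd <- leaf2.vhd -- the golden image is never modified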


Part V – What’s different when using dynamic VM mode?

Dynamic image mode (sometimes referred to as layered image mode) enables a true single-image architecture. The fundamental difference is that three disks are created during the process; however, the underlying technology and the processes for downloading, uploading, and creating backups don’t really differ from what was described previously. When a VM is published in dynamic mode, it results in three disks:

  • System Disk -> OS
  • User Disk -> User profile
  • Application Disk -> Storage for streamed apps

By default, the two newly created disks are 40GB in size – thin-provisioned, however.

On the System Disk there’s a pointer (a junction) to the profile directory on the User Disk (e.g. Documents and Settings). The “All Users” content is linked to \Program Files\Citrix\XCI\All Users. The setup is slightly different on Windows 7 and Vista, where the profiles are stored under \Users.

Once everything is set up, all disk chains are snapshotted and a leaf is added to the bottom of each chain, as sketched below.
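
Conceptually – again a hedged sketch with made-up names – the same snapshot operation is simply repeated once per chain:

     cd /storage/disks
     # One gold/leaf pair per disk: system, user and application
     for d in system user app; do
         vhd-util snapshot -n ${d}-leaf.vhd -p ${d}-gold.vhd
     done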

NOTE:

On a backup job, only the user data disk is snapshotted and uploaded.
When updating the VM from the Synchronizer, the user and application disks remain unchanged.

 
Part VI – How does a VM update change the VHDs?

One of the nice features of the Synchronizer for XenClient is the ability to create multiple versions of an OS image. If an author updates the OS of a managed VM, he may choose to upload only the block-level changes of that update to the Synchronizer. This is a snapshot of a leaf and can be “layered” onto an existing image.

When the update is deployed, only the snapshot with the block-level differences needs to be transferred. On a VM update, the following steps are performed (see the sketch after the list):

  • Download the update from the Synchronizer
  • Remove the current leaf
  • Snapshot the update to create a new leaf
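
In chain terms, with illustrative names again (the update VHD arrives with its parent pointer already set to the existing golden image):

     cd /storage/disks
     # Chain before:  gold.vhd <- old-leaf.vhd (local changes are discarded)
     rm old-leaf.vhd
     # The downloaded update.vhd chains onto gold.vhd; a fresh writable leaf
     # goes on top:   gold.vhd <- update.vhd <- new-leaf.vhd
     vhd-util snapshot -n new-leaf.vhd -p update.vhd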
     
Part VII – How do you keep track of the VHD chains?

To be honest, you don’t need to! XenClient manages this for you, so in normal operation there’s no need for additional tools. Information about the image manifests is visible in the config file /config/vms/uuid.db below the “backend” section, as in the example below:

       "disk_manifests": [
        {
          "disk": {
            "name": "Windows XP Corp Dyn",
            "version": 1,
            "slot": "hda",
            "goldUUID": "be187610-e21b-4617-8ff7-d68e8b2c00f1",
            "type": "system",
            "flags": 0,
            "config": {
              "device": "hda",
              "mode": "w"
            }
          },
          "image_manifests": [
            {
              "disk_image": {
                "UUID": "aa364cf9-b60b-42cb-83a5-575c4e6e14db",
                "parentUUID": null,
                "repositoryUUID": "00000000-0000-0000-0000-000000000000",
                "filename": "aa364cf9-b60b-42cb-83a5-575c4e6e14db.vhd",
                "isGold": true,
                "desc": "xc disk",
                "cTime": 1295292932.779620,
                "state": "final",
                "instSize": 2040480256
              }
            },
            {
              "disk_image": {
                "UUID": "be187610-e21b-4617-8ff7-d68e8b2c00f1",
                "parentUUID": "aa364cf9-b60b-42cb-83a5-575c4e6e14db",
                "repositoryUUID": "00000000-0000-0000-0000-000000000000",
                "filename": "be187610-e21b-4617-8ff7-d68e8b2c00f1.vhd",
                "isGold": true,
                "desc": "xc disk",
                "cTime": 1295292986.059610,
                "state": "final",
                "instSize": 187646464
              }
            }
          ]
        },

To gather information about the chains at the VHD level, there’s a tool in Dom0: vhd-util.

The syntax for dumping the VHD header information is: “vhd-util read -p -n file_name.vhd”

In that header you may be interested in fields like the disk type, the size fields, and the parent UUID. The parent UUID points to the next VHD file in the chain – the parent of the current one. This allows you to track the whole chain if necessary.
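
If you ever need to walk a whole chain, a small loop around vhd-util does the job. A hedged sketch, assuming query -p prints the parent path and reports “has no parent” for the golden image:

     f="/storage/disks/ca3c4762-f455-42ad-9061-9ea9dab36b60.vhd"
     # Follow the parent pointers from the leaf down to the golden image
     while [ -f "$f" ]; do
         echo "$f"
         f=$(vhd-util query -n "$f" -p | grep -v "no parent")
     done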
 
Appendix

As with all my posts – EXCUSE TYPOS and language errors (I’m Swiss). Many aspects of this content are based on analysis of the current code (XenClient V1.0 SP1). With the fast evolution of that great piece of code, changes are expected, so operational modes such as dynamic images WILL likely change over time. I’ll try to update this blog, or replace it with a new one, once too much of the information is no longer valid.

ENJOY your work with XenClient, and please note: we appreciate your feedback – use the XenClient forums or shoot me a message at walter.hofstetter[at]eu.citrix.com.