In the previous Provisioning Services with XenApp best practice blog, I spoke about the type of vDisk to use (the feedback was great, so much so that I've added new items to the list of future best practice discussions). If you are going down the route of using a standard image, you must make another decision, and it is probably one of the most commonly asked questions regarding Provisioning Services for XenApp environments: where do I place the write cache? The goal is to select a location that gives you the best performance without sacrificing other important qualities like fault tolerance or scalability. What makes this such a challenging discussion is the number of options available:

  • Target Device (Physical XenApp or Virtual XenApp) – RAM
  • Target Device (Physical XenApp or Virtual XenApp) – Local Storage
  • Target Device (Physical XenApp or Virtual XenApp) – Shared Storage
  • Provisioning Services – Local Storage
  • Provisioning Services – Shared Storage

Each write cache storage option has different benefits and concerns, especially for the XenApp workload. For most XenApp environments, the best solution will be the one with the following characteristics:

  • Fast: XenApp requires a write cache that responds quickly because XenApp maintains live, interactive user sessions. Any delay in the write cache may be noticeable to users.
  • Dynamic: XenApp servers deliver many different applications and support many different users. Each user and application has an impact on the write cache, and the amount consumed changes from day to day. Exhausting the write cache space would be detrimental to the success of a XenApp environment (a simple free-space monitoring sketch follows this list).
  • Available: XenApp servers must be protected from environment failures because each server supports many users simultaneously. The write cache solution selected should not prevent high-availability options from functioning.
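Because write cache consumption changes from day to day, it is worth monitoring it rather than trusting a one-time estimate. Below is a minimal monitoring sketch in Python, not something built into the product: the D: drive location and the 10% free-space threshold are assumptions you would replace with your own values.

    # Minimal sketch: warn when free space on the write cache drive runs low.
    # The drive letter and threshold are hypothetical assumptions.
    import shutil

    WRITE_CACHE_DRIVE = "D:\\"      # assumed write cache location on the target device
    FREE_SPACE_THRESHOLD = 0.10     # warn when less than 10% is free

    usage = shutil.disk_usage(WRITE_CACHE_DRIVE)
    free_ratio = usage.free / usage.total

    if free_ratio < FREE_SPACE_THRESHOLD:
        print(f"WARNING: {WRITE_CACHE_DRIVE} is only {free_ratio:.0%} free "
              f"({usage.free / 2**30:.1f} GiB remaining)")
    else:
        print(f"{WRITE_CACHE_DRIVE} OK: {free_ratio:.0%} free")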

Target Device – RAM

Definition: The first option for write cache storage location is the target device’s RAM. A portion of the target device’s RAM is set aside and used for the write cache.

Benefits: The main benefit of using the target device’s RAM is that it provides the fastest type of write cache.

Concerns:

  • A portion of the RAM cannot be used for the server workload. RAM is often better used for XenApp applications or user sessions than for write cache. Plus, using RAM to support the write cache is more expensive than using storage.
  • Part of the challenge with using RAM as the write cache storage is determining the amount of RAM required. Provisioning Services can set aside a certain portion of RAM for the write cache, but what happens when that RAM runs out? The write cache is critical to the stable functioning of a provisioned server: once the available write cache is exhausted, the server can no longer write changes, which results in a server failure. If the write cache size is not estimated correctly, using a target device’s RAM might prove detrimental to the stability of the environment (a rough sizing sketch follows).
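To illustrate how quickly a RAM write cache can erode the memory available to applications and sessions, here is a back-of-the-envelope sizing sketch. Every figure in it (sessions per server, cache growth per user, overhead, safety factor) is a hypothetical assumption rather than Citrix guidance; substitute numbers measured in your own pilot.

    # Back-of-the-envelope sizing sketch for a RAM write cache.
    # All figures are hypothetical placeholders -- measure your own environment.
    users_per_server = 60          # concurrent XenApp sessions per server
    cache_per_user_mb = 50         # observed write cache growth per user session
    system_overhead_mb = 512       # OS and service churn between reboots
    safety_factor = 1.5            # buffer for day-to-day variation

    estimated_cache_mb = (users_per_server * cache_per_user_mb
                          + system_overhead_mb) * safety_factor

    print(f"Estimated RAM write cache: {estimated_cache_mb / 1024:.1f} GB")
    # Roughly 5.1 GB of RAM that is no longer available to applications or sessions.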

Target Device – Local Storage

Definition: The second option for write cache storage location is the target device’s local storage. This storage could be the physical disk drives in a physical server, or the virtual disk attached to a virtual server.

Benefits:

  • This solution does not require additional resources, in that most physical servers being provisioned already have local disks installed and unused.
  • Although target device local storage is not as fast as RAM, it still provides fast response times because reads and writes to the write cache are local, meaning the requests do not have to cross the network.
  • Estimating the size of the write cache is difficult, and if done incorrectly it can result in server failure. Local storage, however, typically provides more than enough space for the write cache without requiring the administrator to estimate space requirements.

Concerns: If the target device is virtualized, using local storage will prevent live migration from succeeding because the storage is not shared across the virtual infrastructure hosts, such as XenServer hosts.

Target Device – Shared Storage

Definition: The third option for write cache storage location is on a shared storage device attached to the target device. This solution is usually only valid for environments virtualizing the target device with a solution like Citrix XenServer. This storage is assigned to each virtual machine from a shared storage repository.

Benefits:

  • Although target device shared storage is not as fast as RAM or target device local storage, it still provides fast response times. If the shared storage infrastructure is a SAN or NAS, the reads/writes will still perform adequately because the optimized shared storage infrastructure will help overcome the time added for traversing the network.
  • Although configuring this solution requires sizing the shared storage, the costs of overestimating are not nearly as detrimental as they are with RAM. Storage is significantly cheaper than RAM, so a sizeable buffer on top of the space estimate is of little concern (a quick cost comparison follows this list).
  • Because the target device’s storage is accessible from multiple virtual machines, virtual server live migration, like XenServer XenMotion, is viable.
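To show the arithmetic behind that point, here is a quick cost comparison. The per-gigabyte prices are illustrative assumptions only, not quoted figures; plug in your own hardware and SAN pricing.

    # Hypothetical cost of over-provisioning the write cache buffer.
    # The prices are illustrative assumptions, not quoted figures.
    buffer_gb = 10                 # extra space added on top of the estimate
    ram_cost_per_gb = 30.0         # assumed server RAM price (USD per GB)
    storage_cost_per_gb = 1.5      # assumed shared storage price (USD per GB)

    print(f"{buffer_gb} GB buffer in RAM:            ${buffer_gb * ram_cost_per_gb:,.0f}")
    print(f"{buffer_gb} GB buffer on shared storage: ${buffer_gb * storage_cost_per_gb:,.0f}")
    # The storage buffer costs a small fraction of the RAM buffer, which is why
    # overestimating shared storage is a minor concern.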

Concerns: This solution requires the setup and configuration of a shared storage solution. However, if XenServer is already being utilized, the same shared storage solution can be used for the write cache storage.

Provisioning Services – Local Storage

Definition: The fourth option for write cache storage location is the Provisioning Services server’s local storage. This option uses the physical disks installed within the Provisioning Services server.

Benefits: This solution is extremely easy to set up and requires no additional resources or configuration within the environment.

Concerns:

  • Requests to/from the write cache must cross the network and be serviced by the Provisioning Services streaming service. Because the write cache is across the network, servicing write cache requests will be slower than the previously mentioned options.
  • The streaming service is responsible for sending the appropriate parts of the vDisk to all target devices. Having the write cache on the Provisioning Services server will negatively impact the server’s scalability because the streaming service must also service the write cache requests.
  • Provisioning Services includes a high-availability option, but for it to function, all Provisioning Services servers must have access to the vDisk and to the target device’s write cache. When the write cache is stored on one Provisioning Services server’s local storage, the other servers cannot access it, which prevents Provisioning Services high availability from being enabled.
  • Although disk space is fairly inexpensive, chances are the Provisioning Services server does not have an extensive supply of storage space. With each Provisioning Services server supporting a few hundred target devices, the total write cache could easily exceed hundreds of gigabytes. This could exhaust all local storage on the Provisioning Services server and cause a server failure (a rough illustration follows this list).
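To put that in perspective, here is a rough illustration; the device count and per-device cache growth are hypothetical assumptions, not measured values.

    # Rough illustration of how quickly write cache can consume a
    # Provisioning Services server's local disks. Both figures are assumptions.
    target_devices = 300           # XenApp servers streamed by one PVS server
    cache_per_device_gb = 2        # write cache growth per device between reboots

    total_cache_gb = target_devices * cache_per_device_gb
    print(f"Aggregate write cache on the server: {total_cache_gb} GB")
    # 600 GB -- easily enough to exhaust a typical pair of local server disks.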

Provisioning Services – Shared Storage

Definition: The fifth option for write cache storage location is the Provisioning Services server’s shared storage. This option utilizes a shared storage solution connected to the Provisioning Services server.

Benefits:

  • The shared storage solution allows for Provisioning Services high-availability as each server can access the vDisks and the write cache.
  • Size concerns are mitigated because shared storage devices typically contain significant amounts of storage and can be expanded easily.

Concerns:

  • This is one of the slowest solutions because requests to/from the write cache must cross the network and be serviced by the Provisioning Services streaming service. The Provisioning Services server must then forward the write cache requests onto the shared storage, thus resulting in two network hops for the write cache.
  • Provisioning Services scalability is impacted as the streaming service is responsible for handling Provisioning Services write cache requests and forwarding them onto the shared storage.
  • The solution requires the installation and configuration of a shared storage solution into the environment. If one is already present, then this concern is mitigated.

Best Practice:

Based on the aforementioned criteria and the explanation of the different write cache options, XenApp servers provisioned with Provisioning Services are best suited for:

  • Virtual XenApp servers: Target Device – Shared Storage
  • Physical XenApp servers: Target Device – Local Storage

Please comment with your thoughts or if there is another best practice you are wondering about. The list has already grown based on feedback from previous blogs. Stay tuned for more upcoming best practice blogs specifically focused on Provisioning Services and XenApp:

  • vDisk Type
  • vDisk Cache
  • Active Directory
  • Application Integration
  • Application Streaming Cache
  • System-level settings: Page file, drive remapping and multiple drives
  • Image Management
  • Local Database Storage (event viewer, EdgeSight, AntiVirus updates)
  • Plus more if we get some good ideas on other areas of focus.

Daniel

Follow Daniel’s Blog: http://community.citrix.com/blogs/citrite/danielf

Follow Daniel on Twitter: http://www.twitter.com/djfeller