Welcome back to our four-part series on enabling next-generation virtual delivery infrastructures for the modern data center.

If you missed parts one and two, you can find them here and here, respectively.

For this installment, we’re going to discuss how the Sanbolic SDx Platform fits into the Citrix product portfolio.

We’ll Start by Asking the Following Question:

“What if we used the Sanbolic SDx Platform to build a robust, highly scalable infrastructure for delivering desktops, applications and data to any user, located anywhere, on any device, using Citrix XenDesktop and XenApp?”

The traditional approach is to build out a XenDesktop/XenApp farm on top of an infrastructure constructed from a particular hypervisor running on particular server hardware connected to a particular storage array, and then to repeat the entire process over (and over) again to achieve high availability and/or disaster recovery, since each delivery infrastructure is essentially independent of the others. Instead, let’s build our farm on an infrastructure constructed from multiple hypervisors running on disparate server hardware connected to different storage arrays. No more vendor lock-in, right?

Let’s Continue Our Fairly Ambitious Journey by Taking This Concept a Step Further.

This time we’ll introduce another set of servers (physical and virtual) from vendors we haven’t used yet, perhaps located in another data center or even in the cloud, and somehow (think storage virtualization) interconnect the two infrastructures so they appear as one unified infrastructure. On top of that unified infrastructure, a single XenDesktop/XenApp farm provisions, manages and delivers desktops, applications and data (all residing in a single volume accessible by every server in both locations) securely and on demand, based on pre-defined policies, to users of PCs, Macs, Chromebooks, tablets and smartphones anywhere in the world. Wow! Now that would be really cool!

But wait, let’s not take off our innovation caps just yet. What if we took the heterogeneous infrastructure described above, which is built on a highly available and highly scalable architecture, and instead of using external storage arrays, used a combination of storage devices such as Flash, SSD and HDD installed within the physical servers (with utilization optimized through intelligent, automated tiering) to create a logical volume whose data is synchronously mirrored for high availability? That volume could store all sorts of data: virtual machine files, virtual server (XenApp) and virtual desktop (XenDesktop) images, device write cache files, user profiles, file shares, the databases required to configure, manage and monitor the delivery infrastructure, and data for various other workloads. It would also have the added benefit of being accessible by all of the physical servers, hypervisors and virtual machines that make up the virtual delivery infrastructure at the same time.
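To make the tiering idea a little more concrete, here’s a minimal, purely illustrative Python sketch of the kind of policy an automated tiering engine might apply across Flash, SSD and HDD. The `Extent` structure, tier names and thresholds are our own assumptions for illustration only, not a description of Sanbolic’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical tiers, hottest to coldest; a real product defines its own.
TIERS = ["flash", "ssd", "hdd"]

@dataclass
class Extent:
    """A fixed-size slice of the logical volume being tracked for access heat."""
    lba: int            # starting logical block address
    io_count: int       # accesses observed in the current sample window
    tier: str = "hdd"   # where the extent currently lives

def retier(extents, hot_threshold=1000, warm_threshold=100):
    """Assign each extent to a tier based on how often it was accessed.

    Hot extents are promoted to flash, warm ones to SSD, and cold ones are
    demoted to HDD. A real tiering engine would also respect per-tier capacity
    limits and move the data asynchronously in the background.
    """
    moves = []
    for ext in extents:
        if ext.io_count >= hot_threshold:
            target = "flash"
        elif ext.io_count >= warm_threshold:
            target = "ssd"
        else:
            target = "hdd"
        if target != ext.tier:
            moves.append((ext.lba, ext.tier, target))
            ext.tier = target
    return moves

# Example: three extents with very different access patterns.
extents = [Extent(0, 5000), Extent(4096, 250, tier="flash"), Extent(8192, 3, tier="ssd")]
for lba, src, dst in retier(extents):
    print(f"extent @ {lba}: {src} -> {dst}")
```

The point of the sketch is simply that placement is driven by observed I/O rather than by which box the disk happens to sit in, which is what lets commodity Flash, SSD and HDD inside the servers stand in for an external array.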

In addition to simplifying storage and data administration efforts, the hyper-converged infrastructure described above eliminates the need for external storage arrays, FC switches, HBAs/CNAs, and complex cabling systems. This results in a significant reduction in infrastructure CapEx and OpEx, maintenance fees, per-feature licensing costs, and complexity.

Remember what we said in part two of this blog series about one of the primary goals (if not the primary goal) of IT administrators: extracting the greatest value from IT resources in order to maximize productivity at the lowest possible cost. Can you think of a better way than the above to accomplish this?

Deploying XenDesktop and XenApp on a hyper-converged infrastructure effectively removes customers’ dependency on vendor-specific hardware and enables them to construct a solution using any industry-standard server and storage components they choose. What’s more, customers can take full advantage of the underlying architecture’s resiliency and scalability to span the solution on-premises (between data centers) or to the public cloud in an active/active configuration, providing high availability, load balancing and disaster recovery for virtual desktops and servers, as well as for any other business-critical workloads they choose to run on this extremely robust architecture.
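Conceptually, the synchronous mirroring that underpins such an active/active stretch works like the sketch below: a write is acknowledged only after every copy of the volume has committed it. The `Site` class and its `commit` method are hypothetical stand-ins for real replication endpoints, included only to show the write path, and are not Sanbolic or Citrix APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    """Hypothetical stand-in for one copy of the mirrored volume."""
    name: str
    blocks: dict = field(default_factory=dict)

    def commit(self, lba, data):
        """Persist a block and confirm it. A real site could fail or time out."""
        self.blocks[lba] = data
        return True

def mirrored_write(lba, data, sites):
    """Synchronous mirror: acknowledge the write only after every site commits.

    Keeping both locations in lockstep is what allows either side to serve the
    workload (active/active) or take over after a failure (disaster recovery)
    without data loss. If any copy fails to commit, the write is not acknowledged.
    """
    if all(site.commit(lba, data) for site in sites):
        return "ack"
    return "fail"   # a real implementation would retry or fence the failed site

# Example: one write, two data centers, both copies land before the acknowledgement.
primary, secondary = Site("DC-A"), Site("DC-B")
print(mirrored_write(42, b"user profile data", [primary, secondary]))
```

The trade-off, of course, is that the acknowledgement waits on the slower of the two sites, which is why synchronous mirroring of this kind is typically used between locations with low-latency links.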

In part four of our series we’ll cover how we’d deploy our hypothetical solution.