I spend a lot of my time thinking about how to make remote desktops perform better. Better than HDX.cur_version – 1 (that is, better than our previous release) and, more importantly, better than the competition. Note the special emphasis on the word “remote” in my opening sentence: I’m specifically thinking about WAN/Cloud scenarios here, often where bandwidth is restricted and latency is noticeable.

With my independent adjudicator’s hat on (and yes, I know one can argue that I cannot be truly independent, as I work for Citrix), HDX Thinwire is, by far, the most efficient graphics encoding protocol for everyday desktop use. By this I mean Office-style productivity apps, browsing the web, playing the odd YouTube video… you get the picture. I know Thinwire does, and it will cache and compress it for you too (apologies, bad joke!). Anyway, HDX is perfect for this sort of workload, be it LAN or WAN, and I think this is a generally accepted view in the community as well.

For more complex workloads, for example 3D design-type stuff, I’d highly recommend trying the new H.264 build-to-lossless mode introduced in 7.18 (see my recent blog post on this). With that said, I often hear, normally with reference to these sorts of workloads, that other (H.264-based) protocols are better than Citrix and less complicated to set up. I’ll be the first to admit that we do have a lot of knobs to turn and buttons to press (which a lot of our admins love, by the way), but with every release we are getting closer to that “one protocol fits all” idea. Indeed, it’s what I’m working on right now: not a shiny new feature, but rather simplifying our encoder and making it more intelligent so it can figure out what tools to use and when. My goal: the admin should never need to set a graphics policy, and only when we reach that point will I know we have a truly adaptive graphics encoder.

As part of this work, I borrowed a colleague’s VMware Blast Extreme environment to see how Citrix compares to Blast in an extreme WAN scenario. The test is quite simple: set up identical Windows Server 2016 machines with a GPU, put a WAN emulator in between the VDAs and the endpoint, and apply a harsh 1 Mbps / 150 ms restriction part way through the test to see how the protocols adapt. You can watch my comparison video below:
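If you’d like to reproduce the network shaping but don’t have a dedicated WAN emulator appliance to hand, a Linux box bridging the traffic between the VDA and the endpoint can do a similar job with tc/netem. The snippet below is only a minimal sketch of that idea, not the emulator I actually used; the interface name and the one-way delay figure are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch of a software WAN emulator using Linux tc/netem.

Assumptions: run as root on a Linux machine bridging VDA <-> endpoint
traffic; "eth0" is a placeholder for the bridge interface facing the
endpoint. This stands in for a hardware WAN emulator appliance.
"""
import subprocess

IFACE = "eth0"  # hypothetical interface name


def apply_wan_constraint(rate: str = "1mbit", delay: str = "150ms") -> None:
    """Impose the harsh WAN profile (bandwidth cap plus one-way delay)."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root",
         "netem", "delay", delay, "rate", rate],
        check=True,
    )


def clear_constraint() -> None:
    """Remove the shaping and return the link to LAN-like conditions."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=False)


if __name__ == "__main__":
    input("LAN phase running - press Enter to apply the WAN constraint... ")
    apply_wan_constraint()  # 1 Mbps cap, 150 ms added delay
    input("WAN phase running - press Enter to remove the constraint... ")
    clear_constraint()
```

In this sketch the constraint is flipped on part way through the session, mirroring the test, so you can watch how each protocol reacts to the sudden drop in available bandwidth and rise in latency.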

One thing that struck me was Blast’s inability to adapt to changing network conditions. In the end, I had to pre-configure Blast (following VMware’s tuning guidelines) so that it was optimized for WAN, and even then it didn’t perform as well as HDX. Moreover, that pre-configuration had a detrimental effect on quality in the LAN part of the test. Also, with my developer lenses on, the colours generally seemed a bit “off”, especially shades of red, and I noticed some odd brightness/contrast effects when uncovering large parts of the desktop.

HDX, on the other hand, took a few seconds to realise the network conditions had changed, but when it did, it adapted its encoding strategy to ensure the user continued to get the best and smoothest experience possible. It certainly felt more fluid and easier to use. It also looked great once activity stopped, because it sharpened the screen to lossless. There was no significant difference in server resource usage, which isn’t really a surprise given that most of the heavy lifting in this workload is done on the GPU. For HDX, the only setting applied was the “Visual Quality” graphics policy; nothing else. Soon, I’m hoping even that won’t be needed. 😉

As usual, I’d love to hear from you if you want to set up and try similar tests, or — even better — if you’re seeing conflicting results! Thanks for your time.