Citrix has supported GPU-sharing for XenApp for a long while now, for NVIDIA (and other vendors') GPUs, on vSphere, XenServer and physical servers. The new support announced here complements that existing GPU-sharing for XenApp feature, which is based upon GPU pass-through: one GPU is passed through to one XenApp Windows Server VM, and the sharing occurs in the RDS layer by virtue of multiple users accessing sessions and apps running on that server VM.
- Citrix Technology Professional (CTP), Dane Young: “Adding vGPU support for XenApp workloads will help to enable GPU acceleration for all users and sessions by further driving down the costs when appropriate for the workload. The number of Windows applications benefiting from basic GPU resources continues to grow. As such, GPU and vGPU should be part of every conversation around Windows Apps and Desktops, whether it’s Client or Server hosted. NVIDIA vGPU for XenApp makes this even more cost effective, and a reality for all environments or budgets.”
- CTP, Alexander Ervik Johnsen: “vGPU support will after this be the industry standard to reach for other vendors. XenApp is also the most widely adopted Citrix product out there, so bringing vGPU support into this will certainly accelerate adoption and help customers deliver more applications to end users.”
This new support offers an additional option for those looking for easier image management: running smaller XenApp server VMs with smaller numbers of user sessions, to aid the controlled management of 3D graphical software titles and licenses. The new feature allows one XenApp server VM to access part of a GPU via NVIDIA GRID vGPU technology.
Our Standard Recommendations for XenApp GPU-sharing remain
For the majority of users we continue to recommend the long-standing existing XenApp GPU-sharing feature. Our general advice for GPU-sharing and XenApp is, and remains, that users should use a GPU at least as powerful as an NVIDIA K5000, if not an NVIDIA K6000, with GPU pass-through. This is based on a few facts:
- The average GPU used by a SolidWorks CAD user is a K2000; if you are dealing with CAD or other apps that use the GPU, you really need something more powerful if you are planning to share it
- Users often want to maximise user density per server, with some applications able to support 20-30 users on such a card (see here for some example densities)
- High-end CAD designers on a single workstation often use a dedicated K5000 or K6000 for applications like Dassault CATIA or Siemens NX.
You can read more about supported and certified GPUs, such as the K6000, for GPU pass-through and vGPU, here.
When can I get vGPU for XenApp?
Now, today. If you have a vGPU-enabled version of XenDesktop/XenApp 7.5 or higher that supports vGPU for VDI, it will already work (although please note the requirement for vGPU NVIDIA drivers below). This announcement is simply about formal support. The feature has always been QA (Quality Assurance) tested and is technically the same as Citrix vGPU VDI support, but for Windows Server OSes.
This is one of the benefits of unifying XenApp and XenDesktop into a single architecture: features from XenDesktop can be easily transferred into XenApp. In fact, we were recently able to introduce USB redirection support into XenApp 7.6 as a result of this change. Going forward, we will continue to blur and blend the boundaries, offering the smoothest and broadest continuum of GPU, graphical and HDX solutions.
We had simply chosen not to formally announce support.
What does vGPU for XenApp mean technically, and how does it differ from XenApp GPU-sharing?
Rather than giving the Windows Server VM an entire GPU via pass-through, the XenApp server VM will get part of the physical GPU via vGPU. The RDS layer on top adds a second sharing layer, using that vGPU (part of a physical GPU) instead of a passed-through GPU. Both vGPU and pass-through give the application direct hardware access to the GPU via the vendor drivers. Neither method involves API intercept, nor the overhead and liability of synthetic drivers. More information on vGPU and GPU-passthrough can be found in these overviews:
- HP Reference Architecture
- The Virtualization Matrix graphics analysis
- @TeamRGE Graphics for Virtual Desktops Smackdown whitepaper
- Cisco Reference Architecture
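The two layers of sharing described above (vGPU slicing beneath the VM, RDS session multiplexing inside it) can be illustrated with a simple density calculation. This is a hypothetical sketch; the function name and the session counts are illustrative assumptions, not Citrix sizing guidance:

```python
# Illustrative sketch: user sessions reached per physical GPU when a XenApp
# server VM receives either a whole GPU (pass-through) or a vGPU slice.
# All numbers are hypothetical assumptions, not vendor sizing guidance.

def sessions_per_physical_gpu(vgpu_slices_per_gpu: int,
                              sessions_per_vm: int) -> int:
    """Each vGPU slice backs one XenApp server VM; the RDS layer then
    multiplexes several user sessions onto that VM (the second layer)."""
    return vgpu_slices_per_gpu * sessions_per_vm

# Pass-through: one VM owns the whole GPU; RDS shares it among its sessions.
passthrough = sessions_per_physical_gpu(vgpu_slices_per_gpu=1, sessions_per_vm=20)

# vGPU: e.g. two slices per physical GPU, each VM hosting fewer sessions.
vgpu = sessions_per_physical_gpu(vgpu_slices_per_gpu=2, sessions_per_vm=10)

print(passthrough, vgpu)  # both arrangements reach 20 sessions per GPU here
```

The point of the sketch is that total density can come out the same either way; the choice between the two is about image management and administration, not raw capacity.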
vGPU is NVIDIA technology currently only available for Citrix XenServer. GPU-sharing for Citrix XenApp is available for NVIDIA and other vendors' GPUs, as well as on other platforms, e.g. physical servers and VMware's vSphere/ESXi hypervisor, which supports GPU pass-through (vDGA).
If you have any questions about which GPU-sharing technology to use, please do ask on either the Citrix forums, here, or the NVIDIA forums, here, where you can discuss your plans with other users and our support and engineering staff.
The application and the server VM both see a “GPU”; they do not know whether it is a vGPU rather than a GPU on pass-through. Both vGPU for XenApp and GPU-passthrough (GPU-sharing) rely on direct hardware access via the NVIDIA native drivers; no software emulation or API intercept is involved. As such, the only application compatibility constraint, for both XenApp vGPU and GPU-passthrough, is RDS itself. Application certifications made by vendors via the Citrix Ready program for XenApp GPU-sharing therefore also apply; you can find many applications on the Citrix Ready Marketplace, here.
Please take care to use the correct NVIDIA drivers. For XenApp GPU-sharing on NVIDIA GPUs you should use the GPU pass-through drivers; for XenApp vGPU you should use the NVIDIA vGPU drivers. GPU pass-through drivers (when this article was written) experimentally support CUDA and OpenCL, while vGPU does not; see here. You should check the current status at the time of reading.
So it worked, is tested and you didn’t tell us?
Well, I’m afraid to say, yes. As product manager, part of my job is to set out best practices and to help users avoid getting in a muddle. I’ve seen a lot of confused users and some ill-conceived PoCs with XenApp. Initially, I felt support for this feature could encourage users to choose it over the technically best option: vGPU has a small overhead (a few %) over GPU pass-through. My initial rationale was that users should add additional sessions and users to a server VM to increase user density on a GPU, rather than carve up the GPU to suit the number of users on the server VM.
In particular, I’ve seen a lot of users make mistakes with PoCs, failing to realise that a K1 card has 4 GPUs, each roughly equivalent to a K600. It’s really designed for supporting PowerPoint and Windows Aero, not 3D or CAD applications. If you compare the specs of a K600 to a K2000, you will recognise that this type of card is rarely used for serious GPU loads on a physical workstation (let alone as an option for sharing between multiple 3D users on XenApp).
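As a rough illustration of that spec gap, the CUDA core counts below are taken from NVIDIA's public spec sheets of the Kepler era; treat them as approximate and verify against current datasheets:

```python
# Approximate CUDA core counts from NVIDIA public Kepler-era spec sheets,
# used only to illustrate relative GPU capability.
cuda_cores = {
    "Quadro K600": 192,
    "Quadro K2000": 384,
    "Quadro K5000": 1536,
    "Quadro K6000": 2880,
    "GRID K1 (per GPU, x4 per board)": 192,
    "GRID K2 (per GPU, x2 per board)": 1536,
}

# Each of the GRID K1's four GPUs matches a K600 -- a card well below the
# K2000 that an average SolidWorks user already has in a workstation.
ratio = cuda_cores["Quadro K2000"] / cuda_cores["GRID K1 (per GPU, x4 per board)"]
print(ratio)  # 2.0 -- a K2000 has twice the cores of a single K1 GPU
```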
So why did you change your mind?
I simply underestimated our customers: a lot of you do understand the technicalities of the issue and presented very good scenarios where vGPU and XenApp made sense. Formerly an engineer, I focused on the technical benefits of increasing user density via increasing session density (avoiding that tiny % overhead of vGPU vs. GPU pass-through) without fully accounting for softer factors such as the cost of administration.
As a product manager, you have to be extremely open-minded to the fact that you may have misunderstood your customers’ needs, and so we regularly engage on our feedback forums and visit customers. It was via these regular touch-base opportunities that my prejudices and assumptions were swiftly corrected. One user described a scenario in education (a university) where vGPU for XenApp made perfect sense, thus:
I do agree that introducing a second virtualization layer does not necessarily make sense… unless you approach it from an image management perspective. Let’s assume we have 6 x Dell R720 (256GB / 2 x E5-2680v2 / 1 x GRID K2) that will be used to support graphics-intensive applications used by several engineering departments in our university. My plan was to have 1 x XenApp server per R720, pinned to one socket (directly connected to the K2), with one of the K2 pGPUs connected via pass-through. That collection of 6 x XenApp servers would serve the CAD/CAM applications most commonly used by our undergraduate students (SolidWorks, NX, Creo, CATIA), for example. My next thought was to have 2 x XenApp servers per R720, each sharing the other socket and using a K260Q vGPU. That would give me an extra 12 x XenApp servers that I can allocate between different departments (Mechanical Engineering, Aerospace Engineering, Civil Engineering…) to serve applications that are used by a more limited population of users.
For example, the number of students concurrently using AGI’s Satellite Tool Kit (STK) does not necessarily justify dedicating two full K2 pGPUs (two XenApp servers), or using 16 x Win7 VMs.
I could of course install STK inside the main pool of XenApp servers, but there is something to be said for not having too many heavy engineering applications installed inside the same Windows server image.
At the end of the day I am glad to have options (XenApp + pass-through, vGPU + VDI) and a bunch of hardware (a mix of R720 + K2 and older R720 + K1). Our biggest challenge is going to be learning how to best leverage each technology and properly size our environment based on semester-to-semester demands.
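The arithmetic in that scenario can be sketched as follows. This is a quick illustration, not a Citrix sizing tool; the two-vGPUs-per-pGPU figure comes from the K260Q profile NVIDIA published for the GRID K2 (two K260Q slices per physical GPU):

```python
# Sketch of the sizing arithmetic in the university scenario above.
HOSTS = 6                 # 6 x Dell R720, each with one GRID K2 board
PGPUS_PER_K2 = 2          # a GRID K2 board carries two physical GPUs
K260Q_PER_PGPU = 2        # one K2 pGPU splits into two K260Q vGPUs

# One XenApp VM per host takes a pass-through pGPU for the main CAD pool.
passthrough_vms = HOSTS * 1

# The remaining pGPU on each host is carved into K260Q vGPUs, each
# backing one extra departmental XenApp server VM.
vgpu_vms = HOSTS * (PGPUS_PER_K2 - 1) * K260Q_PER_PGPU

print(passthrough_vms, vgpu_vms)  # 6 pass-through servers, 12 vGPU servers
```

This matches the user's figures: a pool of 6 pass-through XenApp servers for the common CAD applications, plus 12 smaller vGPU-backed servers to allocate per department.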
I’m still a little worried this support might encourage users to choose the wrong solution. But, in this case, I think I got it wrong: the need to provide a fully supported solution for our existing users who understand the caveats overrides my caution. One of the nice things about Citrix is that we are positively encouraged to make these U-turns, and that a well-considered user forum post will be read and considered.
As I envisage it, the best solution for the majority of users will remain the long-proven GPU-sharing XenApp offers; read the IMSCAD case studies to find out more about success with AutoCAD, with many published case studies of XenApp usage. If you have any doubts, please do ask questions on either the Citrix forums, here, or the NVIDIA forums, here, where you can discuss your plans with other users and our support and engineering staff.