In this new entry in our Getting Ready for Windows Server 2016 blog series, we take a look at the Microsoft Windows Server 2016 Discrete Device Assignment (DDA) feature of Hyper-V, and how Citrix XenApp leverages it to enable shared Graphics Processing Unit (GPU) use cases on Hyper-V.

Why it matters

We spend a lot of time staring at screens; like it or not, it's just part of life now. Whether they are high-end graphics designers, call center representatives, students of any age, or even toddlers with an iPhone, people expect smooth, responsive graphics as part of today's productive and mobile communications environment. Meeting that expectation is fundamental to user acceptance of any corporate application or desktop delivery solution, and it becomes more critical every day.

Citrix has offered shared GPU, GPU pass-through, and virtual GPU (vGPU) capabilities for several years now, primarily through implementations based on our XenServer product line.

Now, with Microsoft's introduction of DDA, Citrix XenApp 7.11 and later versions can provide shared GPU on Hyper-V and Microsoft Azure environments as well.

How it works:

In the environment I am using to investigate these new Windows Server 2016 features and capabilities, I have added an NVIDIA GRID K2 card that is assigned to my tmx16-xa01 VM.

[Image: servers]

This GPU is then shared among the user sessions hosted on that particular VM.

[Screenshot: heaven_3b]

As you can see in this screenshot, I am using Heaven 2.0 to stress the GPU, with a comparison of CPU and GPU utilization shown in the windows on the right.

Configuration of DDA is fairly straightforward if you have the proper hardware.

The steps I used are as follows:

  1. Assess the Hyper-V host graphics hardware to determine if a GPU can be assigned to a VM.

    [Screenshot: survey-dda]

There is a great tool on GitHub you can use to determine whether your system has a graphics card capable of supporting this feature.

You can find it at: https://github.com/Microsoft/Virtualization-Documentation/tree/master/hyperv-samples/benarm-powershell/DDA

Here is the output on my system, indicating it has detected my NVIDIA GRID K2 GPU.

  2. Assuming you have an assignable GPU, you will need to release it from the Hyper-V host in order to re-assign it to your virtual machine. First, find the GPU to be re-assigned. I used the following script to list the GPUs that were installed and working on my host server.

      [Screenshot: listhostgpus]
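The listing in the screenshot was produced by a short PowerShell query; a minimal sketch of that kind of listing (the exact script I used may differ) is:

```powershell
# Run on the Hyper-V host (elevated): list display adapters that are
# installed and currently working.
Get-PnpDevice -Class Display -Status OK |
    Format-Table -AutoSize FriendlyName, InstanceId, Status
```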

  3. Disable the GPU by using Disable-PnpDevice.

    [Screenshot: disablehostgpu-grid1]

You can also disable a GPU from Device Manager (my card had two GPUs).

[Screenshot: disablehostgpu-grid2]
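In PowerShell terms, disabling the card looks roughly like the sketch below. The 'GRID K2' name match reflects my card; substitute the name of your own adapter.

```powershell
# Find the GPU on the host by name and disable it.
# "GRID K2" matches my card; adjust the pattern for your adapter.
$gpu = Get-PnpDevice -Class Display |
    Where-Object { $_.FriendlyName -match 'GRID K2' } |
    Select-Object -First 1
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
```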

  4. Next, you need to dismount the GPU from your Hyper-V host server.

    [Screenshot: dismounthostgpu-grid1]
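Dismounting takes the device's PCIe location path rather than its instance ID. A sketch of this step, again matching on my card's name, looks like:

```powershell
# Look up the GPU's PCIe location path, then dismount it from the host.
$gpu = Get-PnpDevice -Class Display |
    Where-Object { $_.FriendlyName -match 'GRID K2' } |
    Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
```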

  5. Next, you need to assign the GPU to your VM using Add-VMAssignableDevice. My VM is "tmx16-xa01".

[Screenshot: assigngpuxa01]
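A sketch of the assignment step (the VM must be powered off, and the location path is the same one used for the dismount):

```powershell
# Assign the dismounted GPU to the VM (the VM must be off).
# "GRID K2" and "tmx16-xa01" reflect my lab; substitute your own values.
$gpu = Get-PnpDevice -Class Display |
    Where-Object { $_.FriendlyName -match 'GRID K2' } |
    Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'tmx16-xa01'
```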

  6. Upon booting the VM, we can see in Device Manager that the GPU is active for that VM.

    [Screenshot: tmx16-xa01grid1]
    You will need to install the appropriate native driver for your GPU within the VM.
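If you prefer the command line to Device Manager, a quick way to confirm the GPU from inside the guest (a sketch, run within the VM) is:

```powershell
# Inside the VM: list display adapters and their status.
# The GPU should report OK once the native driver is installed.
Get-PnpDevice -Class Display |
    Format-Table -AutoSize FriendlyName, Status
```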

  7. In order for the VM to use the GPU for acceleration, a Group Policy Object needs to be enabled:

Local Computer Policy > Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Remote Session Environment > Use the default graphics adapter for all Remote Desktop Services sessions.

[Screenshot: gpoenable]
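If you want to script this setting rather than use the Group Policy editor, this policy is generally understood to map to the registry value below; treat the mapping as an assumption and verify it in your environment before relying on it:

```powershell
# Assumed registry equivalent of the GPO above -- verify in your environment.
# A gpupdate /force (or reboot) may be needed before sessions pick it up.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'bEnumerateHWBeforeSW' -Type DWord -Value 1
```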

If you are curious to dive deeper into how Discrete Device Assignment works, I recommend reading these TechNet posts from Microsoft. They go into more detail around not only GPUs but also NICs and other devices you may want to assign directly to a Hyper-V hosted VM.

https://blogs.technet.microsoft.com/virtualization/2015/11/19/discrete-device-assignment-description-and-background/

https://blogs.technet.microsoft.com/virtualization/2015/11/20/discrete-device-assignment-machines-and-devices/

https://blogs.technet.microsoft.com/virtualization/2015/11/23/discrete-device-assignment-gpus/

How Citrix XenApp use cases can leverage it:

XenApp leverages Session Host GPU sharing on Hyper-V to offload graphics computations from the CPU, which can increase user density and provide a better end-user experience. Both of these benefits are becoming more and more critical as applications hosted on XenApp servers become more graphically oriented. This is especially prevalent in Web and SaaS-like applications, where secure browser capabilities are made available through XenApp. Using XenApp for these scenarios helps organizations embrace SaaS while maintaining the well-established security benefits of Citrix-based application delivery models. Delivering these applications from the secure datacenter, while maintaining optimal per-server user density, helps keep costs in line while rising to the challenge of delivering these emerging applications in a hosted environment.

The performance and on-the-wire efficiencies enabled by the Citrix HDX protocol suite further enhance the user experience across various network and remote access conditions, again keeping relative costs down while meeting end-user expectations.

That's it for this time. In my next post, I will look at some of the opportunities and advantages presented by Microsoft's Storage Spaces Direct, and how Citrix XenApp and XenDesktop "just work" on, and provide additional value for, this storage capability in Microsoft Windows Server 2016.

If you are new to the "Getting Ready for Windows Server 2016" series, you can check out my other posts here.
