In this new entry in our Getting Ready for Windows Server 2016 blog series, we are going to take a look at the Microsoft Windows Server 2016 Discrete Device Assignment (DDA) feature of Hyper-V, and how Citrix XenApp is leveraging it to enable Shared Graphics Processing Unit (GPU) use cases on Hyper-V.

Why it matters

We spend a lot of time staring at screens; like it or not, it’s just part of life now. Whether you are a high-end graphics designer, a call center representative, a student of any age, or even a baby with an iPhone, you expect smooth, responsive graphics as part of today’s productive and mobile communications environment. Meeting that expectation is fundamental to user acceptance of any corporate application or desktop delivery solution, and it becomes more critical every day.

Citrix has offered Shared GPU, GPU pass-through, and virtual GPU (vGPU) capabilities for several years now, primarily through implementations based on our XenServer product line.

Now, with Microsoft’s introduction of DDA, Citrix XenApp 7.11 and later versions can provide Shared GPU on Hyper-V and in Microsoft Azure environments as well.

How it works:

In the environment I am using to investigate these new Windows Server 2016 features and capabilities, I have added an NVIDIA GRID K2 card and assigned it to my tmx16-xa01 VM.


This GPU is then shared among the user sessions hosted on that particular VM.


As you can see in this screenshot, I am using Heaven 2.0 to stress the GPU, with a comparison of CPU and GPU utilization in the windows on the right.

Configuration of DDA is fairly straightforward if you have the proper hardware.

The steps I used are as follows:

  1. Assess the Hyper-V host graphics hardware to determine if a GPU can be assigned to a VM.


There is a great tool on GitHub you can use to determine whether your system has a graphics card capable of supporting this feature: the SurveyDDA.ps1 survey script in Microsoft’s Virtualization-Documentation samples repository.

Here is the output on my system, indicating it has detected my NVIDIA GRID K2 GPU.

  2. Assuming you have an assignable GPU, you will need to release it from the Hyper-V host in order to re-assign it to your virtual machine.
    1. First, find the GPU to be re-assigned. I used the following script to list the GPUs that were installed and working on my host server.
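The original listing was captured as a screenshot; a minimal sketch along the same lines (assuming a display-class GPU and PowerShell 5.x on the host) might look like this:

```powershell
# List display-class devices that are present and working on the host,
# along with the PCI location path needed later for dismount/assignment.
Get-PnpDevice -Class Display -PresentOnly -Status OK |
    ForEach-Object {
        [PSCustomObject]@{
            Name         = $_.FriendlyName
            InstanceId   = $_.InstanceId
            LocationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths `
                                -InstanceId $_.InstanceId).Data[0]
        }
    }
```

Note the LocationPath value for your card; the dismount and assignment cmdlets in the later steps operate on that PCI location path rather than on the device instance ID.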


    2. Disable the GPU by using Disable-PnpDevice.


You can also disable a GPU from Device Manager (my card had two GPUs).
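A hedged sketch of the Disable-PnpDevice step — the friendly-name filter shown here is an assumption based on my GRID K2 card, so adjust it to match yours:

```powershell
# Find the host's GPU by friendly name and disable it prior to dismounting.
# Disable-PnpDevice binds -InstanceId from the piped device objects.
$gpu = Get-PnpDevice -Class Display -PresentOnly |
       Where-Object { $_.FriendlyName -like '*GRID K2*' }
$gpu | Disable-PnpDevice -Confirm:$false
```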


  3. Next, you need to dismount the GPU from your Hyper-V host server using Dismount-VMHostAssignableDevice.
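A sketch of the dismount, continuing from the `$gpu` variable captured in the previous step (an assumption of this walkthrough, not required by the cmdlet):

```powershell
# Resolve the GPU's PCI location path, then dismount it from the host.
# -Force is needed for devices that do not expose a partitioning driver.
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths `
                     -InstanceId $gpu.InstanceId).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
```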


  4. Next, you need to assign the GPU to your VM using Add-VMAssignableDevice. My VM is “tmx16-xa01”.
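A sketch of the assignment, reusing the `$locationPath` from the dismount step. The MMIO values shown are illustrative assumptions; the right sizes depend on your specific GPU, and the VM must be powered off:

```powershell
# Some GPUs need enlarged MMIO space and guest-controlled cache types;
# the exact values vary by card, so treat these as placeholders.
Set-VM -VMName 'tmx16-xa01' -GuestControlledCacheTypes $true `
       -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb

# Assign the dismounted GPU (by its PCI location path) to the VM.
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'tmx16-xa01'
```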


  5. Upon booting the VM, we can see in Device Manager that the GPU is now active for that VM.

    You will need to install the appropriate native driver for your GPU within the VM.

  6. In order for the VM to use the GPU for acceleration, a Group Policy Object needs to be enabled.

Local Computer Policy – Computer Configuration – Administrative Templates – Windows Components – Remote Desktop Services – Remote Desktop Session Host – Remote Session Environment – Use the hardware default graphics adapter for all Remote Desktop Services sessions.
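If you prefer to script this inside the VM rather than use the Group Policy editor, the equivalent registry-backed policy value can be set as sketched below (assuming local policy; in a domain you would deliver this through a domain GPO instead):

```powershell
# Registry value behind the "hardware default graphics adapter" RDS policy.
# 1 = enumerate the hardware GPU before the software renderer for RDS sessions.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
New-ItemProperty -Path $key -Name 'bEnumerateHWBeforeSW' `
    -PropertyType DWord -Value 1 -Force | Out-Null
```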


If you are curious to dive deeper into how Discrete Device Assignment works, I recommend reading the TechNet posts from Microsoft on the subject. They go into more detail on not only GPUs but also NICs and other devices you may want to assign directly to a Hyper-V hosted VM.

How Citrix XenApp use cases can leverage it:

XenApp leverages Session Host GPU sharing on Hyper-V to offload graphics computations from the CPU, which can increase user density and provide a better end-user experience. Both are becoming increasingly critical as applications hosted on XenApp servers become more graphically oriented, particularly in Web and SaaS-like applications where secure browser capabilities are delivered through XenApp. Using XenApp for these scenarios helps embrace SaaS while maintaining the well-established security benefits of Citrix-based application delivery models. Delivering these applications from the secure datacenter, while maintaining optimal per-server user density, keeps costs in line while rising to the challenge of delivering these emerging applications in a hosted environment.

The performance and on-the-wire efficiencies enabled by the Citrix HDX protocol suite further enhance the user experience across various network and remote access conditions, again keeping relative costs down while meeting end-user expectations.

That’s it for this time.  In my next post I will be looking at some of the opportunities and advantages posed by Microsoft’s Storage Spaces Direct and how Citrix XenApp and XenDesktop “just work” on, and provide additional value for, this storage capability in Microsoft Windows Server 2016.

If you are new to the “Getting Ready for Windows Server 2016” series you can check out my other posts here.
