The following blog post covers a product configuration or procedure which Citrix does not currently offer support for. This configuration should only be used in a lab or test environment, not in production deployments. The author is actively seeking feedback on the potential of implementing support for this configuration, but the form any level of support takes has yet to be determined.
As we scope technologies and evolve our architectures we often include the framework for potential new features so that we give partners and customers the opportunity to evaluate the technologies and provide feedback before the design is baked in. These “experimental” features are not supported by Citrix and should never be used in a production environment.
Within Citrix XenDesktop we have GPU-sharing (for XenApp and NVIDIA GRID vGPU) and also GPU pass-through. I am often asked whether we have plans for multiple GPUs or vGPUs per VM. At the moment this is somewhat of a niche case, but increasingly applications are being written with multiple GPUs in mind, see here. It’s interesting technology, as it opens the way for HPC and graphics to be separated, or for large-scale HPC.
Citrix XenServer doesn’t officially support >1 GPU per virtual machine, nor does NVIDIA validate >1 GPU per VM. However, the sharp-eyed amongst you will have noticed that the XenServer team have added the capability to try multiple pass-through GPUs, and it is possible to assign >1 GPU to a VM. If any customer tries this, they should understand that it’s not a supported configuration.
You can’t use the XenCenter GUI or the xe gpu-group constructs to do the GPU assignment, and no VMs with GPU assignments made using the GUI or xe gpu-group can be active on the host.
You need to use the other-config:pci parameter to assign the GPUs:
xe vm-param-set uuid=<vm-uuid> other-config:pci=0/0000:04:0.0,0/0000:05:0.0
Replace 04:0.0 and 05:0.0 with the bus/device/function addresses of the GPUs to be assigned.
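As a rough sketch of how this could be scripted on the XenServer host, the snippet below builds the other-config:pci value from a list of PCI addresses (each entry takes the form 0/<domain>:<bus>:<device>.<function>, joined with commas) and prints the resulting command. The VM UUID and PCI addresses here are placeholders, not values from the post; on a real host you would look up the GPU addresses with lspci (e.g. lspci | grep -i nvidia).

```shell
#!/bin/sh
# Hypothetical example: build the other-config:pci string for two
# pass-through GPUs and show the xe command that would apply it.
# All values below are placeholders.

VM_UUID="0aad8860-0000-0000-0000-000000000000"   # placeholder VM UUID
GPUS="0000:04:00.0 0000:05:00.0"                 # placeholder GPU PCI addresses

# Prefix each address with "0/" and join the entries with commas.
PCI_LIST=""
for bdf in $GPUS; do
    PCI_LIST="${PCI_LIST:+$PCI_LIST,}0/$bdf"
done

# On the host this command would be run while the VM is halted;
# here we only print it for inspection.
echo "xe vm-param-set uuid=$VM_UUID other-config:pci=$PCI_LIST"
```

Printing the command first, rather than running it directly, is a deliberate choice for an unsupported configuration like this: it lets you sanity-check the PCI list before touching the VM.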
I’m always wary of advertising experimental features, as it can lead to complaints from users who want them now, but the benefit of getting user feedback is invaluable. I’d love to hear people’s experiences with particular applications and their use cases so we can assess the value of productising this in the future, and ensure that if we do, we design and test it to fit those use cases.
We also have some other GPU experimental support around CUDA and OpenCL that I’ve previously written about, here.