In part 3 of the “PVS Secrets” blog series I’d like to focus on a combination of two settings that can influence the performance of your PVS environment if not set correctly.
The two settings I’m talking about are:
1. Port Range: This setting indicates a range of ports to be used by the Stream Service for target device communications. By default this range includes 20 ports (as shown below):
2. Threads per port: This setting indicates the number of threads in the thread pool that service UDP packets received on a given UDP port. A larger number of threads allows more target device requests to be processed simultaneously, but consumes more system resources. By default 8 threads per port are configured (as shown below):
When we do the math it turns out that a PVS server is able to process 160 concurrent target device requests by default:
20 Ports * 8 Threads per port = 160 concurrent requests
So, if you’re streaming more than 160 targets from a single PVS server, you may end up in a situation where the Stream Service cannot process an incoming request right away because all ports and threads are busy serving other targets. The affected target will continue to work, but it has to resend the request, so you will see higher read latency and degraded performance for that target.
In an optimal scenario you should configure a ports/threads combination that matches the number of active target devices. So in short, for best performance,
“# of ports” x “# of threads/port” = “max clients”.
It doesn’t really matter whether you reach this number by increasing the number of ports or the number of threads per port, but our lab testing has shown that the best StreamProcess performance is attained when the threads per port do not exceed the number of cores available on the PVS server. Don’t worry if your PVS server doesn’t have enough cores: you’ll just see higher CPU utilization, but CPU utilization has never been a bottleneck for PVS.
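To make the sizing rule concrete, here is a minimal Python sketch that applies both guidelines from this post: cap threads per port at the server’s core count, then pick enough ports so that ports × threads covers the active targets. The function name `recommend_ports_threads` and its parameters are my own illustration, not anything PVS ships with:

```python
import math

def recommend_ports_threads(active_targets: int, cpu_cores: int):
    """Suggest a Stream Service port/thread combination.

    Rule of thumb from the post:
      - threads per port should not exceed the CPU core count
      - ports * threads_per_port should cover all active targets
    """
    threads_per_port = cpu_cores
    ports = math.ceil(active_targets / threads_per_port)
    return ports, threads_per_port

# Example: 500 active targets streamed from an 8-core PVS server.
ports, threads = recommend_ports_threads(500, 8)
print(ports, threads, ports * threads)  # 63 ports * 8 threads = 504 slots
```

With the defaults (20 ports, 8 threads) the same arithmetic gives the 160 concurrent requests mentioned above; any target count beyond that is a signal to widen the port range or, on servers with more cores, raise the thread count.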