Please enjoy this guest blog post written by Iain McLeod of Scapa Technologies.

In recent years, the tendency when benchmarking Thin Client or VDI environments such as Citrix XenApp or XenDesktop has been to use a scoring method: one system is compared against another, and a higher or lower score indicates which system is performing “better”. Complex environments such as VDIs have many moving parts (although fewer since the advent of SSDs!), and we have long argued that a benchmark score is not always the most suitable route to a fuller understanding of how your system works.

In this blog we highlight some key aspects of testing that should be considered.

Firstly, measuring the end user experience of any IT system is crucial to understanding its behavior. This is particularly true for complex systems such as VDIs, and for any system that supports business-critical operations. By measuring both end user and server side performance metrics, bottlenecks attributable to network issues between users’ desktops and the back end system can be identified in seconds.

Often, however, Thin Client and VDI system behavior is unpredictable and the user experience is not constant. This can only be revealed through steady state analysis. It is vital, therefore, that tests can be run at a fixed load for a period of time before additional load is added. It is also crucial that the ‘login’ portion of the test can be isolated from the ‘main’ load/stress/capacity test, so that issues specific to that activity can be identified.
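As an illustration of that separation, here is a minimal Python sketch that times the login phase independently of the main workload; the FakeSession driver and its step names are stand-ins, not any particular tool’s API:

```python
import random
import time

class FakeSession:
    """Stand-in for a real Thin Client/VDI session driver."""
    def login(self):    time.sleep(random.uniform(0.5, 1.5))
    def open_doc(self): time.sleep(random.uniform(0.1, 0.3))
    def save_as(self):  time.sleep(random.uniform(0.1, 0.3))
    def logout(self):   time.sleep(0.2)

def timed(label, action, results):
    """Time one scripted step and file it under its own label."""
    start = time.monotonic()
    action()
    results.setdefault(label, []).append(time.monotonic() - start)

results = {}
session = FakeSession()
timed("login", session.login, results)      # login measured in isolation
for _ in range(5):                          # the 'main' workload phase
    timed("open_doc", session.open_doc, results)
    timed("save_as", session.save_as, results)
timed("logout", session.logout, results)

for label, samples in results.items():
    print(f"{label}: mean {sum(samples) / len(samples):.3f}s over {len(samples)} run(s)")
```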

Consider now what should be tested. In many cases a predefined workload is suitable, but in increasingly complex environments customers and vendors want to measure the performance of their own bespoke or custom applications. An open scripting approach, in which specific functions can be tested, allows this.
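To make the open scripting idea concrete, here is a minimal sketch in Python: any function written against a bespoke application can be registered as a named test step and sequenced into a workload. The step names and registry are invented for the example:

```python
STEPS = {}

def step(name):
    """Register any callable as a named test step."""
    def register(fn):
        STEPS[name] = fn
        return fn
    return register

@step("post_invoice")        # hypothetical bespoke business function
def post_invoice():
    print("driving the customer's invoicing screen")

@step("nightly_report")      # another invented example
def nightly_report():
    print("driving the customer's reporting module")

def run_workload(sequence):
    """Execute whatever sequence of steps the customer has scripted."""
    for name in sequence:
        STEPS[name]()

run_workload(["post_invoice", "nightly_report", "post_invoice"])
```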

In analyzing system issues, it’s important to have visibility into performance and scalability by correlating the end user experience with the server side metrics (e.g. how long did it take for the user to see the ‘Save As’ dialog, versus how long did it take for the server to generate it?).
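The arithmetic behind that correlation is simple. Assuming illustrative timings, the gap between what the server did and what the user saw is the transport and presentation overhead:

```python
# Illustrative timings only: the client saw the 'Save As' dialog after
# 1.9 s, but the server generated it in 0.4 s.
client_observed = 1.9    # seconds until the dialog was visible at the endpoint
server_generated = 0.4   # seconds for the server to produce the dialog

overhead = client_observed - server_generated
print(f"transport/presentation overhead: {overhead:.2f}s "
      f"({overhead / client_observed:.0%} of the user's wait)")
```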

It’s to be expected that at some point in a performance test sessions will become unresponsive and users will begin to suffer errors. It’s vital that these errors are trapped and recovered from, whether by managing the script or the session, and that their cause, location and resolution are reported.
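In script terms, trapping and recovery might look like the sketch below; the flaky step, the recovery logic and the retry budget are all stand-ins for a real session driver:

```python
import logging
import random

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("vdi-test")

class SessionLost(Exception):
    pass

def new_session():
    return {"id": random.randint(1000, 9999)}

def flaky_step(session):
    """Stand-in for a scripted action that sometimes loses its session."""
    if random.random() < 0.3:
        raise SessionLost(f"session {session['id']} stopped responding")
    return "ok"

def run_step(step, session, retries=2):
    """Trap errors, report cause and location, and recover the session."""
    for attempt in range(1, retries + 2):
        try:
            return step(session), session
        except SessionLost as exc:
            log.warning("%s failed on attempt %d: %s", step.__name__, attempt, exc)
            session = new_session()   # recover the session, not just the script
    log.error("%s abandoned after %d attempts", step.__name__, retries + 1)
    return None, session

result, session = run_step(flaky_step, new_session())
print("result:", result)
```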

As caching technologies improve almost daily, it is important that automated tests can populate applications with complex data sets and can branch their logic independently of user distribution and population; otherwise every virtual user ends up exercising the same easily cached path.
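A toy data-driven script makes the point: each virtual user draws its own record and branches accordingly, so no two users need exercise an identical, easily cached path. The data set and branch names are invented:

```python
import csv
import io

# Invented test data: each virtual user gets its own record, so identical,
# easily cached requests never dominate the run.
DATA = io.StringIO("""user,doc_size,path
alice,small,fast_save
bob,large,slow_save
carol,large,fast_save
""")

def fast_save(row):
    print(f"{row['user']}: quick save of a {row['doc_size']} document")

def slow_save(row):
    print(f"{row['user']}: background save of a {row['doc_size']} document")

BRANCHES = {"fast_save": fast_save, "slow_save": slow_save}

for row in csv.DictReader(DATA):
    BRANCHES[row["path"]](row)   # branch per data record, not per user count
```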

Scapa Test and Performance Platform (TPP) for VDI is architected to support all of these requirements. Scripts that drive the application run relative to the application’s location, i.e. not only from the client side. This allows application behavior to be controlled far more rigidly. Scripting at this layer allows rich, data driven scripts with logic branching to be developed. Interacting at the object level increases the ability to synchronize correctly with any application and to take remedial action if required.
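Scapa’s scripting layer itself is proprietary, but the principle of object-level synchronization can be sketched generically: wait on the object itself rather than sleeping for a fixed interval:

```python
import time

def wait_for_object(find, timeout=10.0, poll=0.25):
    """Poll for a UI object until it exists, instead of sleeping blindly.
    Synchronizing on the object itself keeps the script in step with what
    the application has actually rendered."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        obj = find()
        if obj is not None:
            return obj
        time.sleep(poll)
    raise TimeoutError(f"object did not appear within {timeout:.1f}s")

# Stand-in lookup: the 'dialog' appears one second from now.
appears_at = time.monotonic() + 1.0
dialog = wait_for_object(
    lambda: "SaveAsDialog" if time.monotonic() >= appears_at else None)
print("found:", dialog)
```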

Crucially, Scapa TPP adds synchronization between the virtualized location and the client end points, allowing accurate measurement of the end user experience for all users. The Scapa Virtual Channel is used to synchronize the script with the end user, so that the script will not run ahead of the presentation layer at the client end point. The control logic, parameterization and Scapa log information all use this virtual channel to feed back to the controller.
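The snippet below is not the Scapa Virtual Channel protocol, only an illustration of the underlying idea: the script blocks on an acknowledgement from the endpoint, so it can never run ahead of what the user has actually seen:

```python
import threading
import time

# The endpoint signals when the frame has actually been presented.
rendered = threading.Event()

def endpoint():
    time.sleep(0.8)   # presentation layer catching up
    rendered.set()    # 'frame drawn' acknowledgement

threading.Thread(target=endpoint, daemon=True).start()

start = time.monotonic()
rendered.wait(timeout=5)   # the script cannot run ahead of the client
print(f"end user saw the update after {time.monotonic() - start:.2f}s")
```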

Scapa TPP’s default (but not its only) load model is steady state load testing: the load is run at a constant level for a period of time before additional load is ramped in. Load can also be ramped off the system at run time. This is unique to Scapa TPP and enables the edge of a system’s capacity to be identified.
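Expressed as a sketch, a steady state schedule with load ramped both on and off might look like the following; the controller hooks and the figures are hypothetical:

```python
# Each entry is (concurrent users, seconds to hold at that level).
SCHEDULE = [
    (10, 300),
    (25, 300),
    (50, 300),
    (25, 300),   # ramp load *off* to see whether the system recovers
]

def run_schedule(schedule, set_user_count, hold):
    for users, seconds in schedule:
        set_user_count(users)   # hypothetical controller hook
        hold(seconds)           # measure at constant load, not mid-ramp

run_schedule(SCHEDULE,
             set_user_count=lambda n: print(f"holding at {n} users"),
             hold=lambda s: print(f"  ...steady state for {s}s"))
```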

It is imperative that tests can be run at a fixed load for a period of time before additional load is applied, and equally important that the ‘login’ portion of the test can be isolated from the ‘main’ test. If a system is in steady state, then the recently observed system behavior will continue into the future, provided all is well. As previously stated, the behavior of complex systems is often unpredictable and the user experience inconsistent; this can only be revealed through testing at steady state. Simply adding load every N seconds does not constitute a proper test and diagnostic solution, because the system is always in a transient state.
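One simple, illustrative test for steady state is to check whether a recent window of response times stays within a tolerance of its own mean; the window size and tolerance here are arbitrary:

```python
from statistics import mean

def is_steady(samples, window=10, tolerance=0.10):
    """Steady state: the last `window` response times all stay within
    +/- tolerance of their own mean, so recent behavior should persist."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    m = mean(recent)
    return all(abs(s - m) <= tolerance * m for s in recent)

flat = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 1.0, 1.01]
ramping = [1.0 + 0.1 * i for i in range(10)]     # always transient
print(is_steady(flat), is_steady(ramping))       # True False
```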

Tied into this is Scapa TPP’s ability to present, in graphical form, the live correlation of end user metrics with back end performance counters. Seeing the relationship between increased user count and, for example, disk usage mapped in real time allows earlier diagnosis of problems and quicker benchmarking.
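The statistic underneath such a view is just the correlation of two time series. A minimal example using Python’s standard library (3.10 or later), with made-up samples:

```python
from statistics import correlation   # Python 3.10+

users     = [10, 20, 30, 40, 50, 60]   # sampled concurrent user counts
disk_busy = [12, 21, 35, 48, 71, 90]   # % disk busy at the same instants

r = correlation(users, disk_busy)
print(f"user count vs disk usage: r = {r:.2f}")   # near 1: disk tracks load
```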

Scapa TPP also provides reliability: if sessions or applications fail, these failures are recorded and recovered from. Failures do not affect the availability of test data; even if the entire system disappears during a test, there will still be data to analyze.
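One way to guarantee that property, sketched generically rather than as Scapa’s implementation, is to append each measurement to disk and sync it immediately, so nothing is lost if the run dies:

```python
import json
import os
import time

def record(path, sample):
    """Append one measurement and force it to disk immediately, so the
    data survives even if the test rig or the target dies mid-run."""
    with open(path, "a") as f:
        f.write(json.dumps(sample) + "\n")
        f.flush()
        os.fsync(f.fileno())

record("results.jsonl", {"t": time.time(), "step": "login", "secs": 1.42})
```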

If you’d like to see these features and more in action on your Thin Client, VDI or other system environment, you can contact Scapa Technologies directly via the website at www.scapatech.com/contact or email sales@scapatech.com.