We all know these awesome people. The ones who seem to get so much done. They achieve a lot and also seem to be able to do it, again and again.
Love ‘em or hate ‘em – they never cease to amaze us mere mortals. Watching them closely though, you can see there are at least 2 types. Just for fun, let’s call them the preparer-finishers and the flat-out types.
The preparer-finishers are organised and systematic: they take care not to take on another task until the previous one is safely under way and the preparations for the next are ready. The flat-out types take on more and more tasks, multi-tasking to an increasing degree until they hit their limit, then using sheer stamina to keep the load as high as possible while they grind progressively through the work.
Having observed and admired both types, in the workplace and on the personal front, I have always been fascinated by trying to work out which approach is best in general, or alternatively, under what circumstances one approach beats the other.
Somewhat surprisingly, recent innovations in the XenServer hypervisor platform provide an insight into this question. Even better, they give us a tunable model for seeing precisely which method works best: how much preparing and finishing you need to blend with the flat-out periods, and what the consequences are, for you and for those who work and live with you, of doing tasks more one way than the other.
Let’s start with a “real-life” scenario.
You come into work on a Monday morning. It’s a big week. You and your team have to kick off a whole series of tasks across your project or organisation. So you try an experiment and start half your team with the more controlled, preparer-finisher approach. You parcel out the tasks, making sure each person is under way and has the time and bandwidth to keep the lists and plans updated as well as actually doing the task itself. With the other half of your team you take the flat-out approach. You tell them to slice through the work as quickly as they can, loading each of them up with as many tasks as they can handle at one time and giving them more as soon as they complete tasks already on their pile.
A contrived scenario perhaps? Far from it. In fact, this is the equivalent of what we call a storm in computing. This may be a bootstorm, where all the users of a system start logging in, or it could be when users start spinning-up or hammering a system when a factory or store opens for work or as different time zones make use of clouds like Amazon, Rackspace or Softlayer. So computers go through this kind of scenario every day – and sometimes several times a day.
So, place your bets, let’s see who’s going to win.
Using the XenServer Dundee Project, our Engineering Team have upgraded the underlying Linux platform to CentOS 7 and exploited cgroups to place the control-plane daemons into a separate CPU control group. What this piece of techno-babble means is that it allows us to model the preparer-finisher approach: it lets the system carve out the time and processing bandwidth to stay organised, and to both finish one task and prepare for the next. This is less flat-out than previous XenServer versions, where the cgroup technology wasn’t available.
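For the curious, the mechanism itself is surprisingly small. Below is an illustrative sketch of the cgroup-v1 interface that a CentOS 7 kernel exposes, not XenServer’s actual configuration: the group name, the share value and the PIDs are all made up for the example, and the cgroup root is a parameter so you can see what gets written without touching a live system (the real thing would write under `/sys/fs/cgroup/cpu` and need root).

```python
import os

def isolate_control_plane(pids, cgroup_root="/sys/fs/cgroup/cpu",
                          name="control-plane", shares=2048):
    """Create a cpu cgroup and move the given daemon PIDs into it.

    Hypothetical example: the name "control-plane" and the share value
    2048 are illustrative choices, not XenServer's real settings.
    """
    group = os.path.join(cgroup_root, name)
    os.makedirs(group, exist_ok=True)
    # cpu.shares is this group's relative weight against sibling groups
    # (the default is 1024), so 2048 gives the control plane twice the
    # weight of an ordinary group when the CPU is contended.
    with open(os.path.join(group, "cpu.shares"), "w") as f:
        f.write(str(shares))
    # Writing a PID into the group's "tasks" file migrates that process
    # into the group, so it is scheduled under the group's weight.
    for pid in pids:
        with open(os.path.join(group, "tasks"), "a") as f:
            f.write(f"{pid}\n")
    return group
```

The key point is that the reservation is declarative: once the daemons sit in their own group, the kernel scheduler enforces the share on every scheduling decision, with no cooperation needed from the flat-out workloads.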
Let’s look at the results for a computer equivalent of the split-team challenge described above – a bootstorm.
The graph below shows the total amount of time (on the vertical axis) to complete all of 125 separate tasks (shown on the horizontal axis). Red points show the times for the preparer-finisher approach using cgroups. The green points show the flat-out approach.
The difference in the two approaches is quite striking.
The preparer-finisher (cgroup) approach completes all tasks in less than 60% of the time of the flat-out approach. This is because, with many virtual machines (VMs) going flat-out doing lots of input/output, the system can find itself unable to keep the control plane moving (handling the preparation and finishing aspects of the tasks) because it is overloaded by the data plane actually doing the work itself.
Hence, the results are quite clear. To be a high-achiever, carving out the time to prepare and finish work is preferable to going flat-out.
There’s some other intriguing information which we can glean from this model. That relates to how the system behaves when it’s working under this kind of stress. Much like their human counterparts – when working hard, computers and people respond differently when asked to do “just one more thing”.
As a person, I can be a bit unpredictable in how long it takes me to respond to something new when I’ve got a lot of other things going on. Similarly, once I’ve started on yet another task in addition to the ones I’m already doing, it takes me longer to finish it than if I weren’t multi-tasking to such a high degree. I’m only human, after all.
Lo and behold, systems like XenServer are just like us humans in this regard. You can see this from the following graph which shows the amount of time taken (on the vertical axis) to complete each of 125 tasks (shown on the horizontal axis). Again – the red points show the times for the preparer-finisher approach using cgroups and the green points show the flat-out approach.
Again, the results from the new XenServer Dundee Project are stark. They show that by using preparer-finisher approaches like cgroups we get two additional useful benefits:
The time taken to do each task under the preparer-finisher (red) approach is roughly the same, no matter how many tasks are requested.
There is much less volatility (i.e. the red lines are smoother in both graphs) from task to task as the system gets loaded up.
This makes sense. Simply put, cgroups give that dependable, high-performance, less ragged approach to work: the kind of individual, or system, everyone likes to be around and work with. What this means for XenServer end-users is that this protection of the control-plane processes will bring not only improved bootstorm times, but also increased responsiveness for cloud administration tasks and remote access, and increased stability of the host under intense I/O load from VMs.
It should also lead to greater VM densities, and so enable reductions in the total cost of ownership (TCO) for users with large computing pools. Hence, watch out for speed benefits with Cloud Platform and OpenStack running on XenServer, as well as for the large number of XenDesktop users running on XenServer (now in the majority).
So, not much ambiguity here. If you want to be a consistent high-achiever, the flat-out approach doesn’t get you where you want to be; you need preparer-finisher approaches like the cgroups in the new XenServer Dundee Project.
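For readers who like to tinker, both graphs can be reproduced in spirit with a deliberately crude toy model of my own (this is not XenServer’s scheduler, and the numbers are invented, not measured). The assumption it encodes is the one described above: each task needs a fixed amount of control-plane work, and in flat-out mode the control plane is just one more runnable entity whose CPU share shrinks as more VMs run, while in preparer-finisher mode a reserved share keeps it moving regardless of load.

```python
def per_task_times(n_tasks, reserved_share=None, ctrl_work=1.0):
    """Time to finish each task's control-plane work in a toy model.

    Hypothetical model: while task i is in flight, i entities contend
    for the CPU.  With no reservation the control plane's share is 1/i;
    with a reservation (e.g. a cgroup) its share is fixed.
    """
    times = []
    for i in range(1, n_tasks + 1):
        share = reserved_share if reserved_share else 1.0 / i
        times.append(ctrl_work / share)
    return times

# 125 tasks, mirroring the bootstorm experiment in the graphs above.
flat_out = per_task_times(125)                       # green points
prepared = per_task_times(125, reserved_share=0.2)   # red points

print(sum(prepared) / sum(flat_out))  # well under 0.6 in this model
print(max(prepared) - min(prepared))  # 0.0: per-task time stays flat
```

Even this caricature shows the two headline effects: the reserved-share run finishes the whole batch in a fraction of the flat-out total, and its per-task time is constant while the flat-out per-task time grows with the number of tasks already under way, just as the second graph shows.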