Doing nothing does not mean NPV=0

I’m fortunate to have spoken directly with several hundred customers across multiple vertical industry segments in my time at Citrix, ranging all the way from systems engineers to the C-suite. A key observation, which also reflects my own experience, is that the smartest and most successful customers focus on strategy before considering hardware, network and device choices. They don’t get caught up in the social media hype debates that frame doing nothing as a strategy, which overlooks the fact that the Net Present Value of doing nothing is not zero. Strategic conversations involve a far more thoughtful process than simply assuming desktop virtualization is not for you. I’ve seen this time and time again from customers both large and small. As more customers come to understand that desktop virtualization is a strategic conversation, greater numbers will start to move away from the status quo. This is beginning to be reflected in analyst reports, where the estimated penetration of desktop virtualization goes up year after year as the message becomes more broadly understood and accepted.
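To make the "NPV of doing nothing is not zero" point concrete, here is a minimal sketch of the arithmetic. The discount rate and cost figures below are made-up assumptions purely for illustration; the point is only that a status quo with ongoing (and typically rising) PC management costs has a real, negative present value.

```python
# Minimal sketch: the NPV of "doing nothing" is the discounted stream of
# status-quo costs, not zero. All figures are hypothetical assumptions.

def npv(rate, cashflows):
    """Net present value of annual cashflows, starting at year 1."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

discount_rate = 0.08                    # assumed cost of capital
status_quo_costs = [-500, -525, -551]   # assumed rising annual PC management costs ($k)

print(round(npv(discount_rate, status_quo_costs), 1))  # a large negative number, not 0
```

Comparing this figure against the NPV of an alternative delivery model, rather than against an imagined zero, is what the strategic conversation below is about.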

So what is that strategic conversation?

I think it’s worth unearthing an old Citrix document that still very much holds true today. I’ll break the document into two or three blog posts for easier consumption; hopefully it will provide some deeper insight into the thought process that many successful customers go through.

It begins with the famous red Access button from Citrix. The message still holds very true today. Users need access to their applications and data. That could be a full desktop session or just an application session. Marketing may have tweaked the message, but fundamentally it’s all about getting to your stuff anytime, anywhere from any device.

Obviously we can’t just think of applications and data in today’s world as a single entity, or even as serving a single use. The diagram below shows how applications were thought of at the time. The definition has become even broader in recent years with the advent of SaaS apps, local mobile apps and, in time, a greater number of PaaS-based applications.

So clearly there are different types of applications that need to be delivered, but there are also different types of people that need to be served by IT. I see this all the time, from call centers to power users, remote mobile workers, contractors, outsourcers and so on. There’s even a consumer angle here, and many organizations I talk to increasingly need to deliver IT services to an ever more diverse consumer segment.

All this diversity means that different types of users need different sets of applications.

To deal with this variance, the distributed computing model has driven most organizations to put a PC on every desktop as the default appliance for every user served by IT. Of course, as we all know, the majority of those applications are installed and executed locally. As a result, one has to manage the PC, deliver applications over the network and put distributed systems management in place to drive down the overhead costs.

What’s surprising is that many organizations are still in the business of image management. We still haven’t achieved the vision of a truly stateless desktop. Even with the maturity of today’s systems management tools, many people I speak with still resort to at least several standard images with local apps installed in the image. Once the image is deployed, those local apps are then managed by systems management tools, and in many cases what you really end up with is an image per user. So while starting with a golden image is achievable, maintaining it is no easy feat, especially as your people use cases become diverse. Note that this is not a scale problem; in my opinion it is really a use-case diversity challenge.

This “one size fits all” approach to application delivery creates a very costly, complex and inflexible environment in which to manage business and technology change. It becomes more costly, more complex and less flexible as business and technical change leads to further incrementalism in delivery mechanisms that were never designed to handle a new world.

This has forced us as an industry to rethink over the years how to better approach the problem.

We’ve seen application virtualization offered by many vendors to enable better isolation of applications and remove the hard-coded, installed nature of local apps. We’ve seen this isolation technology coupled with streaming technology to optimize delivery, with further optimization available at the network layer.

This is all wonderful, but it has been talked about for years, and only now are we beginning to see any serious traction with application virtualization and streaming. Network-based acceleration and optimization have been far more successful. I’ve personally been very disappointed with the rate of adoption of application virtualization and streaming across the broad customer base, given how long the technology has been available, irrespective of any particular vendor solution. This inertia, however, has not stopped business needs from evolving. IT departments remain under pressure to innovate so their businesses can be more flexible and agile, all while positioning their firms for growth and improving productivity. A lot of the technology evolution that has occurred therefore needs to be carried forward into the new model as we rethink the desktop strategy.

To be continued in part 2.