(When I’m at a loss for a quick breezy title of my own to sum up the subject at hand, I resort to song or album titles. Thanks this time to the late Peter Allen – I considered Leonard Cohen’s “New Skin for the Old Ceremony” instead, but, on the one hand, it might be too obscure… still, on the other, it’s Leonard Cohen! But to the point…)
You can’t swing a dead cat these days without hitting a pundit talking about cloud computing. (For that matter, there’s probably an online service that will rent you a virtual dead cat to swing for $0.07 an hour, and market it as Dead Cat as a Service, or DCaaS.) And while many have seen great promise from cloud computing concepts, others have been asking – especially in the context of enterprise computing – “Haven’t we seen <concept X> before under another name?” And it certainly can appear that vendors and venture capitalists in search of The Next Big Thing have dug up computing concepts of the 1990s (or 1970s), glued fluffy cotton to them, and sold them as “clouds.”
So… new technology, or new names for old? Not to confuse the issue, but: a bit of both, building on things we’ve seen before, on trends that have ebbed and flowed before, but with characteristics and priorities that make all the difference.
Let’s start with the basics.
Is “Software as a Service” (SaaS) the new Application Service Provider (ASP)? Yes, but.
From consumer apps to core business capabilities, the companies that started up as SaaS providers – or survived as the lucky few who climbed their way out of the ASP graveyard – have made their mark by building offerings distinguished by a web-friendly user experience, pay-as-you-go-and-grow economics, the ability to serve multiple tenants/users/organizations securely, and, in most cases, web-services APIs for “mashup” integration with other SaaS offerings as well as your own custom systems. Meanwhile, most of the ASPs in said graveyard got there by taking enterprise apps, sticking them on a farm of servers in their datacenters, installing a copy for every customer, and slapping a usage meter on the front.
And what about Infrastructure as a Service (IaaS) and Platform as a Service (PaaS)? Are public IaaS clouds the new colos? Are public PaaS offerings the new shared hosting? Are private clouds the new clusters? Yes, but.
Oracle’s Larry Ellison made waves a couple of years ago when he mocked the entire notion of cloud computing, calling it a fad, insisting that “private clouds” were just the nom du jour for clusters (and, after all, they’ve been doing clusters for years, so what’s the big deal?) – with a lot more insult and scorn along the way, of course, because, well, that’s how he rolls. But as with SaaS, the key is in the evolution, the essential attributes that make clouds clouds.
The user experience of the cloud – public or private, IaaS or PaaS – must reflect its networked nature. While some functions require or benefit from specialized clients, the bulk of the user experience must be as web-based and lightweight as possible. Self-service – bounded, of course, by policies driven by security, organizational, and economic constraints – is a fundamental need.
Whether internal or external, the cloud is distinguished by its economics. And by this, I don’t mean that resources are fundamentally cheaper in the cloud – cases can be made for both sides of that argument, each supported by significant data – but rather, that their costs are transparent. For public clouds, whether the commodity is compute power, storage, or application access (or, of course, all of the above), the charges are well-defined in terms of processing time, storage capacity, network traffic, and the like – there they are on your web console, your invoice, and your credit card. Internally, though, the same economic assumptions are necessary: the dynamic resources of the cloud make it both possible and necessary to track resource utilization – and potentially charge it back, whether against real budget funds or CorporateITBuck$(TM).
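That “there it is on your invoice” transparency boils down to simple metering arithmetic. Here’s a minimal sketch – the rates and resource names are entirely hypothetical, not any real provider’s price list:

```python
# Hypothetical per-unit rates for metered cloud billing (illustrative only).
RATES = {
    "compute_hours": 0.07,     # per instance-hour
    "storage_gb_month": 0.10,  # per GB-month
    "network_gb": 0.05,        # per GB transferred
}

def monthly_charge(usage):
    """Itemize a bill from a dict of {resource: quantity consumed}."""
    items = {resource: qty * RATES[resource] for resource, qty in usage.items()}
    return items, round(sum(items.values()), 2)

# One instance running all month, modest storage and traffic.
items, total = monthly_charge(
    {"compute_hours": 720, "storage_gb_month": 50, "network_gb": 100}
)
```

The same loop works for internal chargeback – swap the dollar rates for CorporateITBuck$ and the principle holds.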
…All of which enables multi-tenancy, another important attribute of cloud implementations. If resources are to be used dynamically, they must be provisioned to cover peak demand – but with fine-grained control of capacity over time, and with enough sharing to ward off a repeat of the “just-buy-more-servers, sprawl-is-our-friend” 1990s. With external clouds, the argument for multi-tenancy is clear; in on-premises implementations, the tenants are likely to be divisions or projects, but the same concerns hold.
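The arithmetic behind that sharing argument is worth making explicit: when tenants’ peaks don’t line up, a shared pool sized for the combined peak is much smaller than the sum of individually sized silos. A toy illustration, with three invented tenants and made-up hourly demand:

```python
# Hourly demand (capacity units) for three hypothetical tenants
# whose peak hours happen not to coincide.
tenants = {
    "web":   [30, 80, 20],
    "batch": [10, 10, 70],
    "dev":   [40, 20, 10],
}

# Siloed: each tenant buys for its own peak, 1990s-style.
siloed = sum(max(hourly) for hourly in tenants.values())

# Shared: one multi-tenant pool sized for the worst combined hour.
shared = max(sum(hour) for hour in zip(*tenants.values()))
```

Here `siloed` comes to 190 units against 110 for `shared` – the gap is the sprawl that multi-tenancy wards off.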
The approach that delivers the greatest flexibility and savings is the combination of public and private – what is sometimes called the “hybrid cloud” – the cloud-extended datacenter. By meeting capacity and flexibility requirements according to a 90-10 rule (or 80-20 or 95-5 – your results may vary), organizations can meet most of their own requirements while relying on one or more cloud providers to handle the exceptions. While some of these exceptions are intermittent use of the same sorts of resources used internally – burst capacity to deal with peak usage periods, offsite recovery and business continuity services to deal with exceptional failures – others are ongoing gaps in in-house expertise, such as specialized web-service-integrated applications.
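The 90-10 placement decision itself is simple to sketch: size private capacity for typical load and spill the excess to a public provider. The capacity figure and demand levels below are illustrative, not a sizing methodology:

```python
# Units of capacity owned in the private datacenter (hypothetical).
PRIVATE_CAPACITY = 90

def place_load(demand):
    """Split demand between in-house capacity and public-cloud burst."""
    private = min(demand, PRIVATE_CAPACITY)
    public_burst = max(0, demand - PRIVATE_CAPACITY)
    return private, public_burst

assert place_load(70) == (70, 0)    # a normal day: everything stays in-house
assert place_load(130) == (90, 40)  # a peak: the overflow bursts to the cloud
```

The interesting engineering, of course, is in everything this sketch hides – moving workloads, data, and identity across that boundary securely.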
That openness to integration is another hallmark of the cloud: APIs – typically RESTful web-service interfaces – that expose applications, operations, storage, compute power, and other resources for use in end-to-end business processes – making (for instance) your in-house accounting applications and offsite protected storage fit together as well as your Facebook status updates, Twitter tweets, and Flickr photos. One of my managers, back in the olden days when FORTRAN dinosaurs walked the earth and I wrote the occasional line of useful code, once told me, “Never write anything twice. Or, if you can help it, once.” Fine-grained, accessible sockets into cloud-based resources increase reusability – and, through it, both productivity and quality.
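A mashup, at bottom, is just joining JSON from two services into one business-process view. Here’s a toy version in which two invented services are stubbed as local functions (standing in for REST endpoints) so the composition logic runs without a network – every name and field is made up for illustration:

```python
import json

def accounting_service():
    """Stand-in for something like GET /invoices/latest on an accounting app."""
    return json.dumps({"invoice": "INV-1041", "total": 60.40})

def storage_service():
    """Stand-in for something like GET /archives/INV-1041 on offsite storage."""
    return json.dumps({"archive": "INV-1041", "replicas": 3})

def archive_report():
    """The 'mashup': merge both payloads into one end-to-end view."""
    invoice = json.loads(accounting_service())
    archive = json.loads(storage_service())
    return {**invoice, "replicas": archive["replicas"]}

report = archive_report()
```

The point of my old manager’s advice survives: neither service was written twice – the new value lives entirely in the few lines that glue them together.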
Each of the aspects that differentiate cloud computing from what came before it – utility computing, capacity on demand – drives further requirements. For instance, to build a cloud-extended datacenter securely and cost-effectively, and to take advantage of fine-grained control and integration of services, you need to protect and extend both the network for access and the directory for authentication and authorization. These are among the gaps that need to be filled (or, to be more technology-specific, bridged) for cloud-extended datacenters to become mainstream for businesses of all sizes.
So, to return to the original question: haven’t we seen all this before? Yes, but… not really.
(Which is a lot like the conversation: “Isn’t this online apps thing just ‘thin client’/X Windows/3270 terminals?”)
Meanwhile, gotta go – I have a virtual dead cat to tweet about.
(Thanks to Citrix’s Brian Young for “DCaaS” – just when I think things can’t get any more surreal, he turns the dial to 11.)