When we look at cloud service providers, we see lots of different uses of the term, from innovators who are defining new paradigms of scalable, elastic, pay-as-you-go Software-, Infrastructure- and Platform-as-a-Service at one end, to legacy hosting and colocation providers who have glued some cotton balls on the outside of their racks and called them “clouds” at the other. (I like to think of them as “The Cloud” vs. “the cloud.”) Whether fully or partly cloudy, though, one thing is certain: the greatest cost, and the biggest obstacle to cost-effective elasticity, is also the fastest-growing one: the continuous and dramatic expansion of data, both structured and unstructured.

The most common databases by far in the cloud (and The Cloud) for storing structured and semi-structured data are Oracle’s MySQL and Microsoft SQL Server. In some cases, providers offer them in true multi-tenant hosting configurations; in others, separate instances of the software are partitioned off for individual clients. In every case, though, a provider’s ability to optimize its services and leverage its infrastructure depends on how well the environment scales, both up (yielding the highest connection and transaction rate on each server) and out (spreading increased load as widely as needed across servers and networks with minimal degradation).

Supporting these scalability requirements efficiently, safely, and securely depends not only on the performance characteristics of the servers and storage but also on those of the networks the data must cross. Until now, database scalability technology has focused only on the storage and server aspects. With the introduction of the new Citrix NetScaler DataStream™ technology, the network becomes a first-class citizen in assuring the scalability of cloud data.

To get the most efficiency out of each server, the NetScaler DataStream technology optimizes scale-up by multiplexing database connections, enabling each server to handle more users, more sessions, and more database instances. And to exploit the full power of one or more entire server farms, scale-out is optimized too: health monitoring routes connections to wherever they will get the best performance; read-only and read-write transactions are managed separately to suit the differing I/O and caching behaviors they require; and global server load balancing (GSLB) assures availability across sites.
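To make those two motions concrete, here is a minimal Python sketch of the ideas in this paragraph: a small shared pool of backend connections standing in for connection multiplexing (scale-up), and statement-aware routing across healthy servers standing in for scale-out. The backend names, pool size, and the simple SELECT heuristic are illustrative assumptions, not NetScaler’s actual implementation.

```python
# A minimal sketch (not NetScaler's implementation) of the two ideas above:
# connection multiplexing for scale-up, and read/write routing across
# healthy servers for scale-out. Names and sizes are hypothetical.

import itertools
from queue import Queue

PRIMARY = "db-primary:3306"                          # handles all writes
REPLICAS = ["db-replica1:3306", "db-replica2:3306"]  # read-only copies

# Scale-up: a small, shared pool of backend "connections" serves many
# client sessions, so each database server holds far fewer connections.
pool = Queue()
for conn_id in range(4):          # 4 backend slots shared by N clients
    pool.put(conn_id)

# Scale-out: round-robin over the replicas a health monitor reports as up.
healthy = itertools.cycle(REPLICAS)

def execute(sql: str) -> str:
    """Borrow a pooled connection, route the statement, then release it."""
    conn = pool.get()                          # multiplex: reuse, don't open
    verb = sql.lstrip().split(None, 1)[0].upper()
    backend = next(healthy) if verb == "SELECT" else PRIMARY
    try:
        return f"conn {conn} -> {backend}: {verb}"
    finally:
        pool.put(conn)                         # free the slot for the next session

print(execute("SELECT * FROM orders"))         # spread across replicas
print(execute("UPDATE orders SET shipped = 1"))  # always to the primary
```

In a real deployment the health list would shrink and grow as monitors mark servers up or down; the point of the sketch is simply that routing decisions are made per statement, over a far smaller set of server-side connections than there are client sessions.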

For security and compliance, NetScaler can now do for data what it could already do for web traffic: apply user-level security, protocol validation (not only at the web protocol level but also for SQL), and auditing policies. This means that everything that passes over the network — not just web apps and presentation, but the information that supports them — is safe, secure, and performant. Cloud service providers gain the tools to deliver cost-effective scalability at every application tier, and even providers of reference databases and other read-only information stores can now deliver scalability and reliability without high-cost clustering.
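As a hedged illustration of what SQL-level protocol validation can mean, the sketch below checks that a packet conforms to the MySQL client/server wire format (a 3-byte little-endian payload length, a 1-byte sequence number, then a payload whose first byte is the command type) and that the command is on a permitted list. The allowed-command set here is an illustrative assumption, not NetScaler’s policy engine.

```python
# Minimal sketch of SQL-level protocol validation, assuming the MySQL
# wire format: 3-byte little-endian payload length, 1-byte sequence
# number, then a payload whose first byte is the command type.
# The permitted set below is a hypothetical policy, for illustration.

ALLOWED_COMMANDS = {
    0x01,  # COM_QUIT
    0x03,  # COM_QUERY
    0x0e,  # COM_PING
}

def validate_packet(packet: bytes) -> bool:
    """Return True only if the packet is well formed and permitted."""
    if len(packet) < 5:
        return False                      # too short for header + command
    length = int.from_bytes(packet[0:3], "little")
    if length != len(packet) - 4:
        return False                      # declared length doesn't match
    return packet[4] in ALLOWED_COMMANDS  # reject unexpected command types

# A COM_PING packet: payload length 1, sequence 0, command byte 0x0e.
print(validate_packet(b"\x01\x00\x00\x00\x0e"))  # True
print(validate_packet(b"\x00\x00\x00\x00"))      # False (malformed)
```

A validating proxy in this style drops anything it cannot parse before it ever reaches the database, which is the same posture web application firewalls have long taken for HTTP.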

Going forward, the same approach will be extended to benefit the even faster-growing pool of unstructured data in the cloud. Cloud providers can now begin to make the network help them defuse the challenges of the data explosion.

(For more details about the new NetScaler DataStream technology, see the latest blog posts by Sunil Potti and Craig Ellrod.)