We often say “NetScaler was running the biggest clouds before they were even called clouds.” What we’re getting at is that NetScaler got its start powering the original “dotcom”/“webmonster” data centers. Looking back, we now realize that the original e-commerce data centers have evolved — in many cases in the most literal sense — to become today’s public clouds.
The underlying network of a cloud is fundamental to its elasticity. The reason NetScaler plays such a prominent role in cloud network architecture is that it provides the core abstraction and traffic management necessary to shift data center resources around without bringing application services down — during both planned and unplanned downtime. So, the availability and elasticity of a cloud at the macro level are predicated upon the underlying network, and NetScaler is frequently a core component of that network. The cloud is built on the network.
Where things get interesting is when we bring an application workload to a cloud. In most cases, the underlying network services of the cloud itself aren’t directly exposed/available to cloud users. In fact, that’s pretty much the point of the cloud. However, that doesn’t mean that we don’t want/need a variety of network services such as security, acceleration and, yes, good old load balancing and traffic management.
So, we wind up running a set of customer-specific network services on top of the cloud, which then leads to the question: what should those network services be?
Ultimately, I think this comes down to whether you believe you’ll run your entire infrastructure within a single cloud, or if you believe that your infrastructure will be split across a variety of data centers, some public/off-premises and some private/on-premises. For most of us, it’ll be the latter.
In this case, we need to think long and hard about whether we want different network infrastructure stacks – especially for L4-7 services that are so closely tied to our application lifecycles – for each cloud. In most cases, the answer will be “no”. It’s far better to have commonality wherever possible. This not only drives down our costs, but, more importantly, it makes it easier for us to move workloads from data center to data center.
This is why we’re excited to offer a technology preview of NetScaler for AWS. If Amazon is a cloud you use, the preview will let you get a feel for how you can take advantage of NetScaler’s traffic management, acceleration, security and offload capabilities within your larger AWS deployments. It also shows how more advanced NetScaler capabilities can be applied. For example:
- Enhanced NetScaler DataStream SQL intelligent load balancing and DBMS caching improve database scale, availability and performance for AWS-based DBMS farms.
- NetScaler 10 for AWS provides unprecedented visibility via AppFlow, enabling you to leverage your existing on-premises management tools to obtain real-time insight into application and data traffic without having to install and maintain expensive and cumbersome probes and agents.
- New Action Analytics complements NetScaler AppFlow with real-time monitoring of web application and data traffic within the AWS environment, and provides adaptive controls that automatically change or create policies in real time. For example, caching policy can be adjusted automatically so that the top n most frequently requested objects are always cached. Or, if response time exceeds a certain threshold, tracing can be turned on automatically to aid in troubleshooting.
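To make the adaptive-control idea in the last bullet concrete, here is a minimal sketch in Python of the two behaviors described: keeping the top-n most frequently requested objects cacheable, and flipping tracing on once response time crosses a threshold. The class and method names are hypothetical illustrations of the concept — this is not the NetScaler or Action Analytics API.

```python
from collections import Counter

class AdaptivePolicy:
    """Illustrative sketch (not the NetScaler API): cache the top-n most
    frequently requested objects and auto-enable tracing on slow responses."""

    def __init__(self, top_n=10, trace_threshold_ms=500.0):
        self.top_n = top_n
        self.trace_threshold_ms = trace_threshold_ms
        self.request_counts = Counter()
        self.cacheable = set()
        self.tracing = False

    def observe(self, url, response_ms):
        # Count the request, then recompute the top-n cacheable set.
        self.request_counts[url] += 1
        self.cacheable = {u for u, _ in
                          self.request_counts.most_common(self.top_n)}
        # Adaptive control: turn tracing on when latency crosses the threshold.
        if response_ms > self.trace_threshold_ms:
            self.tracing = True

    def should_cache(self, url):
        return url in self.cacheable

# Example: /a is requested most often, and /c is slow.
policy = AdaptivePolicy(top_n=2, trace_threshold_ms=200)
for url, ms in [("/a", 50), ("/a", 60), ("/b", 70), ("/c", 300)]:
    policy.observe(url, ms)
print(policy.should_cache("/a"))  # True: /a is among the top 2
print(policy.tracing)             # True: /c exceeded the 200 ms threshold
```

A real traffic manager would make these decisions in the data path with decayed counters rather than raw totals, but the control loop — observe traffic, recompute policy, act — is the same shape.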
So, if you use Amazon, take the NetScaler for AWS tech preview for a spin. Those of you who are familiar with Amazon know that it’s a different breed, especially at the network level. As a result, some aspects of the tech preview’s provisioning process may look a little quirky at first. However, it’s pretty straightforward, especially if you know your way around Amazon.