Does the pic look familiar? That’s the reality of our day-to-day life, and most of the time we just follow the queue blindly. Every queuing system can be questioned for its efficiency and made better, and the same principle applies to the queues in our computer systems.
NetScaler maintains many different queues for various features and capabilities in the system. These queues work well, and most of the time, as an end user, you would not even notice where a queue is maintained. A couple of months back we were reviewing the internal queues maintained in NetScaler, with the goal of improving queue management and handling beyond what we do today.
Let us take the simple example of client connection handling on NetScaler. We pick up client connections and, based on the LB/persistence logic, hand them over to the selected service. What happens when that service gets overloaded or the backend server cannot process any more traffic? The client connections start queuing up on the service, and the queue keeps growing. Most protection features are applied at the per-service level today, so while making the LB decision the vserver does not know the state of features like HTTP DoS protection, Priority Queuing, or SureConnect. The service selected may therefore not be the best one once these protection features are taken into account. All prioritization happens at the service layer, where the protection logic is applied, so prioritization across multiple services differs based on each local, service-level queue.
Another common problem with service-level queues is that when a service goes down for any reason, its whole queue is cleaned up and the held connections are dropped. Combine all of this with the fact that application logic today is mostly applied at the endpoint/vserver layer, and you can see the improvement opportunity. Is the story coming together?
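The difference is easiest to see in a simplified model. The sketch below is purely illustrative (the class and method names are assumptions, not NetScaler internals): with per-service queues the LB decision is made before queuing, so a service failure drops its held connections, while with a single vserver-level queue no service is committed until dispatch time.

```python
from collections import deque

class Service:
    """A backend service with its own local queue (pre-10.1 model)."""
    def __init__(self, name):
        self.name = name
        self.up = True
        self.queue = deque()

class VserverWithServiceQueues:
    """Old model: the LB decision is made up front, then the connection
    waits in the chosen service's local queue."""
    def __init__(self, services):
        self.services = services

    def enqueue(self, conn):
        svc = min(self.services, key=lambda s: len(s.queue))
        svc.queue.append(conn)

    def service_down(self, svc):
        lost = len(svc.queue)
        svc.queue.clear()          # held connections are simply dropped
        svc.up = False
        return lost

class VserverWithSharedQueue:
    """New model: a single queue at the vserver; no service is committed
    until dispatch time, so a service failure loses nothing."""
    def __init__(self, services):
        self.services = services
        self.queue = deque()

    def enqueue(self, conn):
        self.queue.append(conn)

    def service_down(self, svc):
        svc.up = False
        return 0                   # queued connections survive

    def dispatch(self):
        healthy = [s for s in self.services if s.up]
        return healthy[0].name, self.queue.popleft()

# Per-service queues: taking "a" down drops everything queued on it.
a, b = Service("a"), Service("b")
v1 = VserverWithServiceQueues([a, b])
for n in range(4):
    v1.enqueue(n)
print(v1.service_down(a))   # 2 connections lost

# Shared vserver queue: taking "c" down loses nothing; "d" picks up the work.
c, d = Service("c"), Service("d")
v2 = VserverWithSharedQueue([c, d])
for n in range(4):
    v2.enqueue(n)
print(v2.service_down(c))   # 0 connections lost
print(v2.dispatch())        # ('d', 0)
```

The point of the second model is that the queue outlives any individual service, which is exactly the behavior the vserver-level queue enables.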
With the NetScaler 10.1 release we introduced an improved queuing mechanism that maintains the queue at the vserver layer. Again, as an end user you would not see much difference from the outside, since most of this is handled internally, but you will certainly appreciate the benefits that come along:
- First and foremost: a single point of control
- The ability to tune the queue size at the vserver/app level
- Accurate application of priority across services
- HTTP DoS protection applied at the vserver/app level
- An even more intelligent service-selection mechanism
- The ability to influence the LB decision for better service resource utilization
- Finally, queued connections are not dropped if a service goes down
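To make the priority point concrete, here is a hypothetical sketch (assumed names, not NetScaler internals) of why a single vserver-level priority queue applies priority accurately across services: there is one ordering for all waiting connections, rather than each service reordering only its own local backlog.

```python
import heapq
import itertools

class VserverPriorityQueue:
    """One priority queue shared by all services behind the vserver."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def enqueue(self, conn, priority):
        # Lower number = higher priority, as with typical priority levels.
        heapq.heappush(self._heap, (priority, next(self._seq), conn))

    def dispatch(self):
        _, _, conn = heapq.heappop(self._heap)
        return conn

q = VserverPriorityQueue()
q.enqueue("bulk-1", priority=3)
q.enqueue("checkout-1", priority=1)
q.enqueue("bulk-2", priority=3)
q.enqueue("checkout-2", priority=1)

order = [q.dispatch() for _ in range(4)]
print(order)  # high-priority requests drain first, regardless of arrival order
```

Because every service pulls from the same queue, a high-priority connection can never sit behind low-priority traffic that happened to be queued on a different service.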
Sounds interesting, doesn’t it? Stay tuned for the next blog, which will go into the details of vserver-level queuing.