Load balancing uses a number of algorithms, called load balancing methods, to determine how to distribute the load among the servers. When a load balancer is configured to use the least response time method, it selects the service with the fewest active connections and the lowest average response time. The response time, also called Time to First Byte (TTFB), is the time interval between sending a request packet to a server and receiving the first response packet back.
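TTFB as defined above can be measured directly by a client. The following is a minimal sketch, not a production monitor: it opens a raw TCP connection, sends an HTTP request, and times the gap until the first response byte arrives (the host, port, and path arguments are hypothetical placeholders).

```python
import socket
import time

def measure_ttfb(host, port=80, path="/"):
    """Return the seconds between sending a request and the first response byte.

    This is a simplified illustration: a real probe would handle timeouts,
    redirects, and TLS, and would average several samples.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.monotonic()
        sock.sendall(request.encode())
        sock.recv(1)  # blocks until the first byte of the response arrives
        return time.monotonic() - start
```

A load balancer using the least response time method keeps a running average of this measurement per service rather than probing on every request.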
Least response time load balancing distributes client requests across multiple servers, improving server fault tolerance, resource utilization, and end-user response time. When a limited number of servers provide service to a large number of clients, any one server can become overloaded and its performance can degrade. Load balancing prevents such bottlenecks by forwarding each client request to the server best suited to handle it, thus balancing the load.
In a load balancing setup, the load balancer is logically located between the clients and the server farm, and it manages the traffic flow to the servers in the farm. The network diagram shows the topology of a basic load balancing configuration. Load balancing can be performed on HTTP, SSL, FTP, TCP, SSL_TCP, UDP, SSL_BRIDGE, NNTP, DNS, ANY, SIP-UDP, DNS-TCP, and RTSP traffic.
The following example shows how a load balancer works using the least response time method. The load balancer selects the server by using the value (N) of the following expression:
N = Number of active transactions * TTFB
The load balancer delivers the requests as follows:
- Server-3 receives the first request.
Note: The service with no active transaction is selected first.
- Server-3 receives the second and third requests because the service has the least N value.
- Server-1 receives the fourth request. Because Server-1 and Server-3 have the same N value, the load balancer performs round robin. Therefore, Server-3 receives the fifth request.
- Server-2 receives the sixth request because the service has the least N value.
- Server-1 receives the seventh request. Because Server-1, Server-2, and Server-3 have the same N value, the load balancer performs round robin. Therefore, Server-2 receives the eighth request.
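The selection logic walked through above can be sketched in Python. This is a simplified model, not NetScaler's actual implementation: the server names and TTFB values are hypothetical, each service tracks its active transaction count and average TTFB, and a round-robin cursor breaks ties among services with the same N value.

```python
class Server:
    """A backend service with an average TTFB and an active transaction count."""

    def __init__(self, name, ttfb):
        self.name = name
        self.ttfb = ttfb    # average time to first byte (e.g. in milliseconds)
        self.active = 0     # number of active transactions

    @property
    def n(self):
        # Selection metric: N = number of active transactions * TTFB
        return self.active * self.ttfb


class LeastResponseTimeBalancer:
    """Pick the service with the lowest N; round-robin among ties."""

    def __init__(self, servers):
        self.servers = servers
        self._rr = 0  # round-robin cursor used only to break ties

    def select(self):
        lowest = min(s.n for s in self.servers)
        tied = [s for s in self.servers if s.n == lowest]
        # When several services share the lowest N (including N = 0 for
        # services with no active transactions), rotate among them.
        choice = tied[self._rr % len(tied)]
        self._rr += 1
        return choice
```

A caller would increment `server.active` when dispatching a request and decrement it on completion; that bookkeeping is omitted here to keep the selection step in focus.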
Whether it’s load balancing XenApp Web Interface, iPhone/iPad resources, websites, Linux servers, Windows servers, e-commerce sites, or enterprise applications, NetScaler is the perfect choice. NetScaler, available as a network device or as a virtualized appliance, is a web application delivery appliance that accelerates internal and externally facing web applications up to 5x, optimizes application availability through advanced L4-7 traffic management, increases security with an integrated application firewall, and substantially lowers costs by increasing web server efficiency.
Citrix NetScaler is a comprehensive system deployed in front of web servers that combines high-speed load balancing and content switching with application acceleration, highly-efficient data compression, static and dynamic content caching, SSL acceleration, network optimization, application performance monitoring, and robust application security.
Available as a virtual machine, the NetScaler is perfect for load balancing virtual servers in the datacenter or in the cloud.