When Fast Ramp is ON
By default, Fast Ramp is enabled on the Citrix NetScaler, and TCP slow start is normally in effect. With Fast Ramp enabled, however, the appliance bypasses the TCP slow-start mechanism when it links a client connection, typically a fresh one, to an existing server connection: rather than starting from a small congestion window, it starts with the congestion window of the most recently used server connection. Connection reuse on the NetScaler links client connections to server-side connections held in the reuse pool; clients are matched to already established server connections (picked from the pool) based on their maximum segment size (MSS). Because slow start happens only at TCP connection initiation, only the very first client pays the slow-start overhead. Subsequent clients are linked to persistent server-side connections that have already undergone slow start, so their windows build up rapidly and they get the benefit of fast ramp.
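The reuse-pool linking described above can be sketched in a few lines. This is a hypothetical model, not Citrix code; the connection fields and the `ReusePool` class are assumptions for illustration:

```python
# Hypothetical sketch (not NetScaler internals): linking a new client
# connection to an already-ramped server-side connection from the reuse
# pool, keyed by MSS.
from collections import defaultdict

class ReusePool:
    """Server-side connections that have already completed TCP slow start."""
    def __init__(self):
        self._by_mss = defaultdict(list)  # MSS -> warm server connections

    def release(self, conn):
        self._by_mss[conn["mss"]].append(conn)

    def acquire(self, client_mss):
        """Return a warm connection matching the client's MSS, else None."""
        pool = self._by_mss.get(client_mss)
        return pool.pop() if pool else None

pool = ReusePool()
pool.release({"server": "10.0.0.5:80", "mss": 1460, "cwnd": 64 * 1460})

warm = pool.acquire(1460)  # reused: inherits a large congestion window
cold = pool.acquire(536)   # no MSS match: a fresh connection must slow-start
print(warm is not None, cold is None)  # True True
```

Only the first connection to each back-end server pays the slow-start cost; every later client with a matching MSS inherits an already-opened window.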
When Fast Ramp is OFF
If the NetScaler system’s fast ramp feature is disabled, every new client request (TCP connection) that comes to the NetScaler system will undergo slow start.
Slow start is one of the algorithms that TCP uses to control congestion in the network. It works by increasing the TCP congestion window each time an acknowledgment is received, growing the window by the number of segments acknowledged. The congestion window is one of the factors that determine how many bytes can be outstanding at any time. When a connection is set up, the congestion window, a value maintained independently at each host, is set to a small multiple of the maximum segment size (MSS) allowed on that connection. The window then grows as ACKs arrive and shrinks when congestion is detected in the network or time-outs occur.
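The growth pattern is easy to see in a toy model. The initial window and slow-start threshold below are assumed values for illustration, not NetScaler or RFC-mandated constants:

```python
# Illustrative model of classic TCP slow start: because every ACKed segment
# adds one MSS to the window, the window roughly doubles each round trip
# until it reaches the slow-start threshold (after which growth turns linear
# in congestion avoidance).
MSS = 1460             # maximum segment size in bytes (assumed)
ssthresh = 64 * MSS    # assumed slow-start threshold

def slow_start_rounds(initial_cwnd=2 * MSS):
    cwnd, rounds = initial_cwnd, []
    while cwnd < ssthresh:
        rounds.append(cwnd)
        cwnd *= 2  # window doubles per RTT during slow start
    rounds.append(ssthresh)
    return rounds

for rtt, cwnd in enumerate(slow_start_rounds()):
    print(f"RTT {rtt}: cwnd = {cwnd // MSS} segments")
```

Starting from 2 segments, the window takes several round trips to open up; that per-connection ramp-up is exactly the cost Fast Ramp avoids by reusing warm server connections.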
Endpoint vs. non-Endpoint mode
Endpoint mode and non-Endpoint mode have nothing to do with bridge or reverse proxy connections. By default, the NetScaler operates in non-Endpoint mode, where it is subject to the Fast-Ramp setting and the TCP slow-start mechanism described above. The only time the NetScaler is not subject to them is when it operates in Endpoint mode. In Endpoint mode the NetScaler manages the client and server connections separately, in essence taking over from the server the job of managing the connection with the client.
Endpoint mode only occurs when one of the following features is enabled on the NetScaler:
- TCP Buffering
- SSL Offload
- Compression
- AppFW
So if you are really concerned about avoiding TCP slow-start, then you can either enable the Fast-Ramp setting, or enable one of the TCP Buffering, Compression, AppFW or SSL Offload features. However, at that point you will be mucking with the TCP settings, so be careful.
Load Balancing Slow-Start
Load Balancing Slow Start in the NetScaler, also known as Startup Round Robin, is different from TCP Slow-Start.
Load Balancing Slow-Start is enabled if any of the following load balancing methods are configured on the VIP:
- Least Connections
- Least Response Time
- Least Bandwidth
- Least Packets
Load Balancing Slow Start engages when one of the following conditions is true:
- The load balancing method changes to one of the methods in the preceding list
- A new service is bound to the virtual server
- A service bound to the virtual server changes its state from DOWN to UP
- A service bound to the virtual server is enabled
Startup RR Factor – default
By default, a newly configured virtual server remains in slow-start mode with a Startup RR Factor of 100. For a virtual server that is already configured and serving production traffic, when services are enabled or come up, the time to exit slow start is calculated from the request rate:
- Request rate = current hit counter value – the value sampled 7 seconds earlier
If the appliance has seven packet engines with 10 services bound to the virtual server, and the request rate is 100 per second, then the virtual server exits the slow start mode when it reaches 100 hits x number of packet engines (7) x bound services (10) = 7000 hits.
When Slow Start engages, it simply puts requests into the Round Robin processing queue. To exit, a threshold has to be reached, based on the calculation:
- 100 hits x number of packet engines (7) x bound services (10) = 7000 hits
- 100 hits is the default value.
It can be set to something else, by the following:
- > set lb parameter -startupRRFactor 100
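The exit-threshold arithmetic from the example above is just a product of three numbers. The packet engine and service counts here are the illustrative values from the example, not fixed properties of the appliance:

```python
# Sketch of the slow-start exit threshold described above:
# startup RR factor x packet engines x services bound to the virtual server.
def slow_start_exit_threshold(startup_rr_factor, packet_engines, bound_services):
    return startup_rr_factor * packet_engines * bound_services

# Default factor of 100, seven packet engines, 10 bound services:
print(slow_start_exit_threshold(100, 7, 10))  # 7000 hits
```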
Startup RR Factor
Alternatively, the appliance can be configured to require that a specific number of requests pass through the virtual server before it exits Slow Start mode.
Run the following command to set this configuration by using the Startup RR Factor:
- > set lb parameter -startupRRFactor 5
If the appliance has seven packet engines with 10 services bound to the virtual server and the startup_rr_factor is 5, the virtual server exits the Slow Start mode when it reaches the following:
- 5 hits x bound services (10) x number of packet engines (7) = 350 hits (max)
As soon as one of the packet engines gets 50 hits for that virtual server, it comes out of Round Robin mode and broadcasts a message to all the other packet engines. Even if the other packet engines have not yet reached 50 hits, they all come out of the Round Robin method.
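The per-engine exit behavior above can be sketched as follows. This is a hedged model of the described behavior, not NetScaler code; the hit counts are made up for the example:

```python
# Sketch of the per-packet-engine exit check described above: each engine
# counts hits for the virtual server independently, and the first engine to
# reach (startup RR factor x bound services) hits broadcasts the exit to all
# the others.
def first_engine_to_exit(hits_per_engine, startup_rr_factor, bound_services):
    per_engine_threshold = startup_rr_factor * bound_services  # 5 x 10 = 50
    for engine, hits in enumerate(hits_per_engine):
        if hits >= per_engine_threshold:
            return engine  # this engine broadcasts "exit Round Robin"
    return None  # no engine has crossed the threshold yet

# Seven engines; engine 3 is the first to reach 50 hits, so every engine
# exits Round Robin, even though the others are still below 50.
print(first_engine_to_exit([12, 30, 44, 50, 8, 21, 33], 5, 10))  # 3
```

This is why the 350-hit figure in the example is a maximum: the virtual server can leave Round Robin as soon as any single engine reaches its 50-hit share.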
The purpose of this has to do with the existing load on the services and servers already bound. Suppose a load balancing VIP has "Service A" with 100 connections and "Service B" with 200 connections, and you then add a "Service C" that starts with 0 connections. With a least-type method, the obvious happens: Service C gets slammed with traffic. So the virtual server is placed into Round Robin until the threshold is met, evening out the load and slowly ramping up all the services bound to the VIP.
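A toy selection loop makes the problem concrete. The connection counts are the ones from the scenario above; the least-connections picker is a simplified stand-in for the real method:

```python
# Toy illustration of why Startup Round Robin exists: under a plain
# least-connections method, a newly bound service with 0 connections absorbs
# every new request until it catches up with its peers.
services = {"A": 100, "B": 200, "C": 0}  # C is newly bound

def least_connections(services):
    return min(services, key=services.get)

picks = []
for _ in range(10):
    choice = least_connections(services)
    services[choice] += 1
    picks.append(choice)

print(picks)  # every one of the first requests lands on Service C
```

Startup Round Robin sidesteps this by distributing requests evenly until the exit threshold is met, letting Service C warm up gradually instead of taking the full brunt of new traffic.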
Fast-Ramp, Slow-Start and Endpoint modes: CTX124714
Load Balancing Slow-Start: CTX108886
Read more about Craig Ellrod here