Policies are sets of instructions, based on configured rules, that drive incoming traffic from the NetScaler to the backend servers. When bound at the virtual server level, they have supported rewrite, responder, AppFirewall, Integrated Caching (IC), and other features for many releases.

Until now, there was no way to evaluate runtime statistics at the vserver level through policy expressions. With the 9.3 release, we added vserver-based policy expressions that let you look at runtime statistics at the virtual server level before making any decision. This opened up a set of use cases:

  • Allowing requests to go to the backend servers based on
    • The throughput of the virtual server
    • The number of incoming concurrent connections to the virtual server
    • Whether the state of the virtual server is UP, DOWN, or OUT OF SERVICE
    • The percentage of services bound to the virtual server that are up
    • The response time, measured as the average TTFB (Time to First Byte) across all the services bound to the virtual server
    • The number of requests in the surge queue of the virtual server

Connection-Based Use Case

As can be seen, the first 100,000 requests intercepted by “LBVSvr1” are sent to the “Svc1” service, which forwards them to the backend server. With the 100,001st request, “responder_policy” invokes “responder_action”. This action responds to the user with a page saying “Please try again later”.

Now, the question is: how does the NetScaler respond at exactly the 100,001st connection at runtime?

Here’s how:

To make this happen, we configure a policy that uses the SYS.VSERVER("virtual_server_name").CONNECTIONS policy expression together with a corresponding responder action:

  • Responder Policy COMMAND

add responder policy responder_policy 'SYS.VSERVER("LBVSvr1").CONNECTIONS.GT(100000)' responder_action

    • With this command we are adding a responder policy named “responder_policy”. This policy returns TRUE and invokes the “responder_action” action whenever the number of concurrent connections on “LBVSvr1” becomes greater than 100,000.
  • Responder Action COMMAND

add responder action responder_action respondwith '"HTTP/1.1 404 Not Found\r\n\r\n" + "Please try again later"' -bypassSafetyCheck YES


This policy is bound to the “LBVSvr1” load balancing virtual server, ensuring that whenever the number of concurrent connections exceeds 100,000, NetScaler responds with HTTP/1.1 404 Not Found and a default message to the user.
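A typical bind command for this looks like the following (the priority value of 100 is illustrative; pick one that fits your existing policies):

bind lb vserver LBVSvr1 -policyName responder_policy -priority 100 -gotoPriorityExpression END -type REQUEST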

Here is the list of other Policy expressions for all the use cases:

  • THROUGHPUT: SYS.VSERVER("sharepoint_vserver").THROUGHPUT

This expression prefix returns the throughput of the “sharepoint_vserver” virtual server in Mbps.
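For example, a responder policy along these lines (the 500 Mbps threshold and the reuse of “responder_action” are illustrative) fires once the vserver’s throughput crosses 500 Mbps:

add responder policy throughput_policy 'SYS.VSERVER("sharepoint_vserver").THROUGHPUT.GT(500)' responder_action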

  • STATE: SYS.VSERVER(“sharepoint_vserver”).STATE

This expression prefix returns the state of the “sharepoint_vserver” virtual server as UP, DOWN, or OUT_OF_SERVICE. These values, when passed as an argument to the EQ() operator, return either TRUE or FALSE.

For example,

SYS.VSERVER("sharepoint_vserver").STATE.EQ(UP) will return TRUE whenever the state of “sharepoint_vserver” is UP.
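So a policy such as the following (names are illustrative; .NOT negates the boolean result) could serve a maintenance page whenever the vserver is not UP:

add responder policy state_policy 'SYS.VSERVER("sharepoint_vserver").STATE.EQ(UP).NOT' responder_action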

  • HEALTH: SYS.VSERVER(“sharepoint_vserver”).HEALTH

This expression prefix returns the percentage of services in an UP state for the “sharepoint_vserver” virtual server.
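For instance, a policy like this one (50 percent is an illustrative threshold) returns TRUE when fewer than half of the bound services are UP:

add responder policy health_policy 'SYS.VSERVER("sharepoint_vserver").HEALTH.LT(50)' responder_action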

  • RESPTIME: SYS.VSERVER(“sharepoint_vserver”).RESPTIME

This expression prefix returns the response time in milliseconds. Response time is the average TTFB (Time to First Byte) from all the services bound to the “sharepoint_vserver” virtual server.
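As an illustration, the following policy (the 1000 ms threshold is arbitrary) triggers when the average TTFB grows beyond one second:

add responder policy resptime_policy 'SYS.VSERVER("sharepoint_vserver").RESPTIME.GT(1000)' responder_action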

  • SURGECOUNT: SYS.VSERVER(“sharepoint_vserver”).SURGECOUNT

This expression prefix returns the number of requests in the surge queue of the “sharepoint_vserver” virtual server.
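For example, a policy like this one (the threshold of 500 queued requests is illustrative) fires once the surge queue starts building up:

add responder policy surge_policy 'SYS.VSERVER("sharepoint_vserver").SURGECOUNT.GT(500)' responder_action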

  • CONNECTIONS: SYS.VSERVER(“sharepoint_vserver”).CONNECTIONS

This expression prefix returns the number of concurrent client connections with the “sharepoint_vserver” virtual server.
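These prefixes can also be combined with the usual boolean operators. For example, the following policy (thresholds are illustrative) triggers only when the connection count is high and the vserver health has already dropped below 75 percent:

add responder policy overload_policy 'SYS.VSERVER("sharepoint_vserver").CONNECTIONS.GT(100000) && SYS.VSERVER("sharepoint_vserver").HEALTH.LT(75)' responder_action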

These policy expressions give you the power, at the virtual server level, to gauge incoming traffic at runtime: to keep the number of connections in check, to watch the surge count, to verify that the response time doesn’t exceed a certain limit, to know whether the backend servers are UP before forwarding requests to them, and, last but not least, to keep the throughput at a desired level, all at runtime!