HTTP is the protocol that powers the Web: no matter what we do online, it is at work under the hood all the time. NetScaler was initially developed as a pure HTTP proxy, so the core of its strength and value lies in handling HTTP flows well across varying scenarios. The focus is to always optimize and accelerate the experience over HTTP and any App payload carried on the HTTP stream. HTTP use cases and behavior can differ from deployment to deployment and from Application to Application. In some cases you want to apply strict HTTP behavior, ensuring protocol semantics are followed in every part of the communication, while in other cases you might want to keep it relaxed for generic clients.

Importantly, HTTP profiles let you alter the behavior of NetScaler’s HTTP stack and customize it for the Applications deployed through NetScaler. You can define your own profiles and use multiple profiles on the same NetScaler system, choosing a different profile for each Application based on how it behaves. HTTP profiles have been supported for the last couple of releases; let us look at the control and optimization parameters they offer on NetScaler version 9.3 builds.
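As a rough illustration of what that could look like in practice, the sketch below creates a custom profile and points a virtual server at it through a NITRO-style REST call from Python. The management address, credentials, resource names (`nshttpprofile`, `lbvserver`) and parameter spellings are assumptions based on typical NetScaler naming and may differ across builds; the same settings can equally be applied from the CLI or GUI.

```python
# Hypothetical sketch: create a custom HTTP profile and bind it to a vserver
# via a NITRO-style REST API. Resource and field names are assumptions and
# may differ on your NetScaler build; adjust them to match your system.
import requests

NSIP = "https://netscaler.example.com"                        # assumed management address
AUTH = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "nsroot"}   # assumed auth headers

# Desired profile settings, mirroring the parameters discussed in this post.
profile = {
    "nshttpprofile": {
        "name": "http_strict_profile",
        "conmultiplex": "ENABLED",      # connection multiplexing
        "dropinvalreqs": "ENABLED",     # drop invalid HTTP requests
        "markhttp09inval": "ENABLED",   # mark HTTP/0.9 requests invalid
        "markconnreqinval": "ENABLED",  # mark CONNECT requests invalid
        "dropextracrlf": "ENABLED",     # drop extra CRLF
        "maxreusepool": 200,            # cap the backend reuse pool
    }
}

# Create the profile, then point an existing load-balancing vserver at it.
requests.post(f"{NSIP}/nitro/v1/config/nshttpprofile",
              json=profile, headers=AUTH, verify=False)
requests.put(f"{NSIP}/nitro/v1/config/lbvserver",
             json={"lbvserver": {"name": "app_vserver",
                                 "httpprofilename": "http_strict_profile"}},
             headers=AUTH, verify=False)
```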

The built-in “default” profile is globally enabled on all HTTP endpoints in the system. All HTTP flows passing through NetScaler are governed by this profile unless a customized profile is bound to them. Let us dig through the core parameters here; they are all generic, and you can use them in your customized profiles as well.

  • Number of entities using this profile
    • Shows how many entities are using this profile on the system 
  • Max connections in reuse pool
    • NetScaler maintains a reuse pool of server-side connections for every backend entity
    • The reuse pool keeps those connections active and ready to use
    • This parameter controls how many such connections can be held
    • A value of 0 means no limit, which should be fine for most use cases (see the reuse-pool sketch after this list)
  • Incomplete header delay
    • NetScaler, like any proxy, is expected to receive the complete HTTP header
    • It should receive the whole header before the request is passed on to the backend App/server
    • Because of network latency and long headers, receiving everything may take time
    • This is how the HTTP RFC expects a proxy to behave, but attackers have been exploiting it
    • The recent Slow Header attacks target exactly this part of HTTP behavior
    • NetScaler has built-in protection against such attacks, and this setting plays a role in it (see the header-deadline sketch after this list)
  • Request Timeout
    • This parameter defines the time after which NetScaler should time out the request
  • Connection Multiplexing
    • Connection multiplexing delivers the core value of NetScaler’s HTTP processing
    • It allows NetScaler to multiplex requests over shared backend connections for better performance
    • It should be enabled for most scenarios and Apps (the reuse-pool sketch after this list illustrates the idea)
  • Drop Invalid HTTP Request
    • During strict protocol checks, HTTP requests can be marked invalid
    • A request is invalid, for example, if its headers are not complete
    • There are many other cases in which a request is marked invalid
    • By default NetScaler does not drop these requests
    • We let them flow through the system but stop tracking them
    • Such requests do not go through any policy evaluation module
    • In typical Web and Internet scenarios, such requests are fairly common
  • Mark HTTP/0.9 requests as invalid
    • HTTP/0.9 is the oldest HTTP specification and had its share of issues
    • Today most of the Web runs over the HTTP/1.0 and HTTP/1.1 protocols
  • Mark CONNECT requests as invalid
    • CONNECT requests are used with proxies to establish a tunnel
    • Once NetScaler receives a CONNECT request, it sends the traffic to the server without any L7 processing
    • Thus no policy or other feature processing is applicable here (the request-line sketch after this list shows both the HTTP/0.9 and CONNECT forms)
  • Compression on PUSH packets
    • Defines how compression behaves for packets that have the TCP PUSH flag set
  • Drop extra CRLF
    • At times HTTP clients and scripts insert an additional CRLF
    • This can cause confusion at the server end and hamper processing
    • It is therefore better to drop these extra bytes as part of NetScaler’s HTTP parsing
  • Drop extra data from server
    • HTTP servers at times respond with more data than expected
    • The server-side response is framed either as Chunked or with a Content-Length header
    • Each framing defines exactly how much body data to expect
    • NetScaler can therefore detect the additional data
    • With this setting, the extra data is dropped as the response passes through us (see the framing sketch after this list)
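
To make the reuse-pool and connection-multiplexing ideas concrete, here is a minimal Python sketch of what a proxy-side reuse pool does: finished backend connections are parked per backend and handed back out for later requests instead of opening new TCP connections. This is an illustration of the concept only, not NetScaler’s implementation; the class, size limit, and backend host are hypothetical.

```python
# Conceptual sketch of a backend connection reuse pool (not NetScaler code).
# Finished keep-alive connections are parked per backend and reused for the
# next request, which is the essence of connection multiplexing.
import http.client
from collections import defaultdict, deque

class ReusePool:
    def __init__(self, max_per_backend=200):    # analogous to "max connections in reuse pool"
        self.max_per_backend = max_per_backend
        self.idle = defaultdict(deque)           # backend -> idle keep-alive connections

    def acquire(self, host, port=80):
        key = (host, port)
        if self.idle[key]:
            return self.idle[key].popleft()      # reuse an existing connection
        return http.client.HTTPConnection(host, port)  # otherwise open a new one

    def release(self, host, port, conn):
        key = (host, port)
        if len(self.idle[key]) < self.max_per_backend:
            self.idle[key].append(conn)          # park it for the next request
        else:
            conn.close()                         # pool is full, close the surplus connection

pool = ReusePool()
conn = pool.acquire("backend.example.com")       # illustrative backend host (assumption)
conn.request("GET", "/health")
conn.getresponse().read()                        # drain the body so the socket can be reused
pool.release("backend.example.com", 80, conn)    # the same TCP connection serves the next request
```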
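The incomplete header delay is easier to picture with a small sketch: a proxy reads the request headers under a deadline and gives up if the blank line terminating the header block does not arrive in time, which is exactly the opening that slow-header attacks try to hold open. The timeout value and socket handling below are illustrative assumptions, not NetScaler’s internals.

```python
# Conceptual sketch: enforce a deadline on receiving the complete HTTP header.
# A slow-header attacker trickles bytes to keep the connection half-open; a
# deadline like this bounds how long that can go on.
import socket, time

def read_full_header(conn: socket.socket, deadline_secs: float = 7.0) -> bytes:
    start = time.monotonic()
    buf = b""
    while b"\r\n\r\n" not in buf:                # header block ends with a blank line
        remaining = deadline_secs - (time.monotonic() - start)
        if remaining <= 0:
            raise TimeoutError("incomplete HTTP header within the allowed delay")
        conn.settimeout(remaining)
        chunk = conn.recv(4096)
        if not chunk:                            # peer closed before finishing the header
            raise ConnectionError("connection closed mid-header")
        buf += chunk
    return buf
```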
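The HTTP/0.9 and CONNECT checks both look at the request line. An HTTP/0.9 request has no version token (and no headers), while a CONNECT request names a host:port to tunnel to. The sketch below shows how a parser could classify these cases; it illustrates the protocol shapes only and is not how NetScaler’s parser is implemented.

```python
# Conceptual sketch: classify a request line the way the profile options describe.
def classify_request_line(line: str) -> str:
    parts = line.strip().split()
    if len(parts) == 2 and parts[0] == "GET":
        return "HTTP/0.9"        # e.g. "GET /index.html" -- no version token, no headers follow
    if len(parts) == 3 and parts[0] == "CONNECT":
        return "CONNECT tunnel"  # e.g. "CONNECT mail.example.com:443 HTTP/1.1" -- opaque bytes afterwards
    if len(parts) == 3 and parts[2].startswith("HTTP/"):
        return "normal"          # e.g. "GET /index.html HTTP/1.1"
    return "invalid"

print(classify_request_line("GET /index.html"))                        # HTTP/0.9
print(classify_request_line("CONNECT mail.example.com:443 HTTP/1.1"))  # CONNECT tunnel
print(classify_request_line("GET /index.html HTTP/1.1"))               # normal
```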
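Finally, the “drop extra data from server” behavior follows from how HTTP frames the response body: a Content-Length header promises an exact byte count, and chunked transfer encoding ends with a zero-length chunk, so anything beyond that boundary is surplus. The sketch below trims a Content-Length-framed response to illustrate the idea; it is a simplification that ignores chunked parsing and many edge cases.

```python
# Conceptual sketch: with Content-Length framing, the body length is known
# exactly, so any bytes the server sends beyond it can be dropped.
def trim_extra_data(headers: dict, body: bytes) -> bytes:
    expected = int(headers.get("Content-Length", len(body)))
    if len(body) > expected:
        return body[:expected]   # drop the surplus bytes, as the profile setting would
    return body

resp_headers = {"Content-Length": "5"}
print(trim_extra_data(resp_headers, b"helloEXTRA-JUNK"))  # b'hello'
```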

Quite interesting, isn’t it? At its core, NetScaler allows you to define your own ways to customize and use its HTTP parsing and conformance stack. Meaningful use of these profiles always helps build performance-oriented and stable application deployments…