So far we have looked at what is not working in today's TCP implementations and how MPTCP comes to the rescue (/blogs/2013/08/23/networking-beyond-tcp-the-mptcp-way/). As we prepare to play a major role in the mobile world and its transport requirements, an MPTCP implementation was a natural fit. In the mobile world, client-side changes happen at a much faster rate than changes to servers and applications: mobile clients can adopt new technologies quickly and be ready to work with the preferred parameters. If we wait for MPTCP to be adopted deep inside the datacenter, it will take a long time and there will still be inconsistencies. NetScaler provides ADC functionality in front of application and server farms, so in most deployment scenarios all client traffic terminates at NetScaler. The real path to optimize, therefore, is between the client and NetScaler.

This kind of deployment architecture makes NetScaler the perfect device in the access path to act as an MPTCP proxy. As mobile clients connect to NetScaler, we can enable an MPTCP session and work with the mobile device to use the best possible communication path. In most cases mobile clients connect over long-distance, unreliable network paths, so this leg of the access path is the one that needs MPTCP optimization, whereas the server side of the infrastructure sits in the datacenter with high-speed links that already enable fast data transfer. As an MPTCP proxy, NetScaler connects to clients over MPTCP and transforms the MPTCP sessions into regular TCP connections on the server-side network, so the backend network does not need to be aware that MPTCP is used on the front end. Here is a quick view of what this looks like.
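In rough form (the Wi-Fi and cellular links are illustrative examples of the access networks a mobile client might use in parallel):

```
Mobile client <== MPTCP sub-flow 1 (e.g. Wi-Fi)  ==> NetScaler <-- regular TCP --> Backend servers
              <== MPTCP sub-flow 2 (e.g. 3G/LTE) ==>
```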

The beauty of the NetScaler implementation is that every L7 feature, including policies, caching, firewall, and others, works exactly the way it works today for TCP. In proxy mode, each feature operates unchanged over the TCP sub-flows of an MPTCP session. Before transmitting data to the backend, NetScaler ensures the MPTCP sequence space is updated and in sync with the state of every sub-flow. Congestion control and avoidance are applied at the individual sub-flow level. When the response comes back to NetScaler, the MPTCP stack identifies the best sub-flow on which to send the response data to the client.

You can enable or disable MPTCP using TCP profiles, which can be bound globally to affect the whole system or to specific bind points to affect the traffic of a particular vserver or service.
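As a rough sketch of what that configuration might look like (the profile and vserver names here are hypothetical, and exact options can vary by build), MPTCP is switched on in a TCP profile, which is then bound to the client-facing vserver:

```
> add ns tcpProfile mptcp_prof -mptcp ENABLED
> set lb vserver mobile_vs -tcpProfileName mptcp_prof
> show ns tcpProfile mptcp_prof
```

In addition, there are a bunch of MPTCP parameters that can be configured in TCP Params: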

– mptcpConCloseOnPassiveSF – Connection close behavior for passive sub-flows.

– mptcpChecksum – Enable or disable the MPTCP DSS checksum.

– mptcpSFtimeout – Timeout for idle MPTCP sub-flows.

– mptcpSFReplaceTimeout – Minimum idle time for a sub-flow before it can be replaced.

– mptcpMaxSF – Maximum number of sub-flow connections supported.

– mptcpMaxPendingSF – Maximum number of sub-flow connections supported in the pending-join state.

– mptcpPendingJoinThreshold – Maximum number of pending-join connections allowed at the system level.

– mptcpRTOsToSwitchSF – Number of RTOs on a sub-flow before switching to another sub-flow.

– mptcpUseBackupOnDSS – If NetScaler receives a DSS on a backup sub-flow, start using that sub-flow.
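For illustration, these are set with the global TCP Params command; the values below are placeholders rather than recommendations, and option names and ranges may differ across builds:

```
> set ns tcpParam -mptcpChecksum ENABLED -mptcpMaxSF 4 -mptcpRTOsToSwitchSF 2
> show ns tcpParam
```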

As you can see, there is plenty of control provided to administrators for working with the MPTCP stack. MPTCP support is generally available with MR2 build 119.7 of NetScaler release 10.1. We encourage you to try the feature and send us your feedback.