Difficult to imagine? Since our grandparents' days, networking across systems has worked reliably over TCP, and that is what we have seen all along. The systems at either end of the network never had to bother with how the TCP connection was established, so the core definition of TCP stayed "a single connection between two hosts". When researchers designed the TCP/IP protocol suite, they did an awesome job of anticipating the requirements that might come up over the next couple of decades. Thanks to their vision, we are able to communicate well over TCP to this day.
But what did change in between? The network of devices, the Internet, grew at an unexpected rate and broke all predictions. Internet backbone traffic in 1990 was close to 1 terabyte, and it grew to nearly 35,000 terabytes by the year 2000. What exceptional growth, and large businesses started transforming themselves onto the Internet. Was TCP designed to take this much load without slowing down, or reaching a point where it starts breaking? While all this growth was happening, researchers kept working in the background on TCP's congestion control problems, and many new RFCs came up and were adopted as well. Today we are all able to work efficiently thanks to these complex congestion control and avoidance algorithms.
Beyond the scope of TCP, routing protocols play a critical role in networking today by finding the shortest of the multiple available paths between any two hosts. Even once the upper layer sets the properties on which the best path should be selected, the "best" path is often not really the one that gets picked. And once a path is picked, the whole communication is bound to that single path (or a few equal-cost paths) for packet transmission. This certainly means ignoring the possible alternate paths that become available for data transfer between the same pair of hosts. Even when routers try to make intelligent decisions by distributing packets of the same TCP flow across multiple paths, there is the concern of packets arriving out of order. Out-of-order arrival kicks off TCP's congestion control: after the duplicate-ACK threshold is crossed, TCP believes the packet was lost and retransmits it. It also reduces the sending rate by cutting the congestion window in half. To avoid this issue, routers try to ensure that packets belonging to a single flow are sent through the same path.
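To make the mechanism concrete, here is a minimal sketch, not a real TCP stack, of how reordering across paths can masquerade as loss. The receiver ACKs cumulatively, so a delayed segment produces a run of duplicate ACKs; three duplicates trigger fast retransmit and a halved congestion window at the sender. The function name and the simple list-of-ACKs model are illustrative assumptions.

```python
# Toy model of a TCP sender's reaction to duplicate ACKs (fast retransmit).
# Real stacks track much more state; this only shows the cwnd halving.

DUP_ACK_THRESHOLD = 3  # standard fast-retransmit trigger

def sender_reaction(acks, cwnd):
    """Walk a stream of cumulative ACK numbers; halve cwnd when three
    duplicate ACKs are seen, mimicking TCP's multiplicative decrease."""
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                cwnd = max(cwnd // 2, 1)  # multiplicative decrease
        else:
            last_ack, dup_count = ack, 0
    return cwnd

# Segments 1, 2, 4, 5, 6 arrive (segment 3 took a slower path), so the
# receiver keeps re-ACKing 2 — the sender halves its window:
print(sender_reaction([1, 2, 2, 2, 2], cwnd=10))  # -> 5
```

Nothing was actually lost here; the data was merely reordered, yet the sender slows down. This is exactly why routers pin a flow to one path.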
You must be asking why to even bother about multiple paths? Good question, and the answer is with you... do you carry a smartphone with internet connectivity? Does your phone connect over WiFi? And it must have Bluetooth and a USB port? Then you have the answer, don't you?
Every single one of these capabilities would allow your smartphone to connect to a remote host, which is great news. You could have two or four parallel paths for communication... but who carries the data from one end to the other? The same old TCP, and one TCP session can only use one of the available paths. So how are you going to utilize the other connected interfaces, and what about the scenarios where, on the move, you disconnect from and reconnect to a given network? Let us look at a realistic picture around mobile. Mobile data usage in 2012 was nearly 12 times the data volume of the entire connected internet in 2000. Mobile devices have outnumbered fixed connected devices by a large margin, and the number is growing at an explosive rate. Hence we need to relook at how data transfer worked on the fixed-line internet versus how it should work with mobile devices all around.
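You can see those parallel paths on any machine you have at hand. A quick sketch using Python's standard library (works on Unix-like systems; each interface, WiFi, cellular, Bluetooth PAN, USB tethering, is a potential path, yet a classic TCP socket binds to exactly one local address, so a single session rides a single path):

```python
import socket

# List the network interfaces the OS knows about. On a phone-like device
# you would typically see wlan0, rmnet0 (cellular), and so on — each one
# a path TCP cannot use simultaneously with the others.
for index, name in socket.if_nameindex():
    print(index, name)
```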
With an understanding of the problem at hand, let us consider how different data transfer would be if a TCP session could make use of all the available paths between two hosts. Let us quickly list the benefits of such a scenario:
– Better performance at the network layer
– Utilizing the best path, the one which is least congested
– Using the path with the highest bandwidth for bulk transfer
– Much more robust app connectivity when the device loses one path
– Causing the least trouble to already crowded and congested paths
– Finally, a much better end-user experience... causing the wow factor
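What would it take to get these benefits? Here is a toy scheduler illustrating the core job of a multipath transport: splitting one byte stream across several subflows, here preferring the least-loaded path. A real multipath TCP does this inside the kernel, transparently to the application; the function and data below are purely illustrative, not any real API.

```python
# Toy "least-loaded path" scheduler: assign each chunk of a stream to
# whichever path currently carries the fewest bytes. A real multipath
# transport would also weigh RTT, loss, and per-path congestion windows.

def stripe(chunks, num_paths):
    """Assign each chunk to the currently least-loaded path."""
    paths = [[] for _ in range(num_paths)]
    for chunk in chunks:
        target = min(paths, key=lambda p: sum(len(c) for c in p))
        target.append(chunk)
    return paths

data = [b"aaaa", b"bb", b"cccc", b"d"]
for i, path in enumerate(stripe(data, 2)):
    print(f"path {i}: {path}")
```

Even this tiny sketch hints at the hard parts: the receiver must reassemble chunks in order across paths, and a path that disappears mid-transfer must have its chunks rescheduled — which is exactly the machinery a multipath-aware transport has to provide.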
Looks great and something we should move towards, but how? Should we build a new protocol suite, since TCP was not designed to take care of such scenarios? Would that even be acceptable? What happened to SCTP adoption across the internet? Many times a great technology does not get the acceptance it deserves because it tries to change basics that are used by billions of devices.
An interesting problem, and I will leave it with you for a little bit of brainstorming... stay tuned for the next blog, where we take the discussion to the next level...