Google engineers want an overhaul of TCP (Transmission Control Protocol), the transport layer that underpins the Web, and are suggesting ways to reduce latency and make the Web faster.
The company’s “Make the Web Faster” team is making several recommendations to improve TCP performance. In a blog post on Monday, team member Yuchung Cheng called TCP “the workhorse of the Internet,” designed to deliver Web content and operate over a range of network types. Web browsers, he said, typically open up parallel TCP connections ahead of making actual requests. “This strategy overcomes inherent TCP limitations but results in high latency in many situations and is not scalable,” he said. “Our research shows that the key to reducing latency is saving round trips. We’re experimenting with several improvements to TCP.”
Recommendations include increasing the TCP initial congestion window. “The amount of data sent at the beginning of a TCP connection is currently three packets, implying three round trips to deliver a tiny 15KB of content. Our experiments indicate that IW10 [initial congestion window of 10 packets] reduces the network latency of Web transfers by over 10 percent,” Cheng said. Google also wants the initial retransmission timeout reduced from three seconds to one second. “An RTO [retransmission timeout] of three seconds was appropriate a couple of decades ago, but today’s Internet requires a much smaller timeout.”
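Those round-trip savings follow from TCP slow start, in which the sender’s congestion window starts at the initial window and roughly doubles each round trip. The C sketch below is a simplified back-of-the-envelope model of that behavior, not Google’s measurement methodology; the 15KB payload and the window sizes of 3 and 10 come from Cheng’s example, while the 1,460-byte segment size and the pure-doubling growth are assumptions made for illustration.

```c
#include <stdio.h>

/* Rough slow-start model: the congestion window starts at iw segments
 * and doubles each round trip until all segments have been sent.
 * Returns the number of data round trips (connection setup excluded). */
static int round_trips(int total_segments, int iw) {
    int sent = 0, cwnd = iw, rtts = 0;
    while (sent < total_segments) {
        sent += cwnd;   /* one window of data per round trip */
        cwnd *= 2;      /* classic slow-start doubling */
        rtts++;
    }
    return rtts;
}

int main(void) {
    const int mss = 1460;                    /* typical Ethernet-path segment size */
    const int bytes = 15 * 1024;             /* the ~15KB response in Cheng's example */
    int segments = (bytes + mss - 1) / mss;  /* 11 segments */

    printf("IW3:  %d round trips\n", round_trips(segments, 3));   /* prints 3 */
    printf("IW10: %d round trips\n", round_trips(segments, 10));  /* prints 2 */
    return 0;
}
```

On Linux, the initial window can be raised per route with the ip route tool’s initcwnd parameter, and later kernels adopted 10 segments as the default.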
Google’s suggestions, said IDC analyst Al Hilwa, “appear to be well-researched recommendations and if implemented broadly will yield significant improvements in practically everyone’s network performance and latency. The issue is that the capability has to be broadly implemented to achieve the desired performance gains. Of course new TCP/IP stacks would work with the old ones as they would now, but when two sides of a connection have the improvements, the benefits should surface.”
Google is also encouraging use of TCP Fast Open, a Google-developed extension that reduces application network latency by letting data be exchanged during TCP’s opening handshake, and of proportional rate reduction (PRR) for TCP. “Packet losses indicate the network is in disorder or is congested. PRR, a new loss recovery algorithm, retransmits smoothly to recover losses during network congestion. The algorithm is faster than the current mechanism because it adjusts the transmission rate according to the degree of losses. PRR is now part of the Linux kernel and is in the process of becoming part of the TCP standard,” Cheng said.
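TCP Fast Open saves a round trip by letting the client send its request in the SYN packet instead of waiting for the three-way handshake to complete. The following is a minimal client-side sketch for Linux (kernel 3.6 or later on the client), offered as an illustration rather than a reference implementation; the server address, port, and HTTP request are placeholders, and error handling is kept to a minimum.

```c
/* Minimal TCP Fast Open client sketch for Linux. sendto() with
 * MSG_FASTOPEN combines the connect and the first data segment,
 * so the request rides in the SYN (with a Fast Open cookie on
 * repeat connections), saving one round trip. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef MSG_FASTOPEN
#define MSG_FASTOPEN 0x20000000   /* older C libraries may not define it */
#endif

int main(void) {
    const char *request = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(80);
    inet_pton(AF_INET, "93.184.216.34", &server.sin_addr);  /* placeholder IP */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Send the request on the not-yet-connected socket; the kernel
     * performs the Fast Open handshake on our behalf. */
    ssize_t n = sendto(fd, request, strlen(request), MSG_FASTOPEN,
                       (struct sockaddr *)&server, sizeof(server));
    if (n < 0) { perror("sendto(MSG_FASTOPEN)"); close(fd); return 1; }

    char buf[4096];
    ssize_t got = read(fd, buf, sizeof(buf));
    if (got > 0)
        printf("received %zd bytes\n", got);

    close(fd);
    return 0;
}
```

The server must opt in as well, by setting the TCP_FASTOPEN option on its listening socket, and the net.ipv4.tcp_fastopen sysctl governs whether the kernel uses the feature at all.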
Google is also developing algorithms to recover faster from losses on “noisy” mobile networks, Cheng said.
Google’s TCP work is open source and disseminated through the Linux kernel, IETF standards proposals, and research publications to encourage industry involvement, Cheng noted.