A client and a server, located in different countries, are exchanging a huge file and need to complete the transfer quickly. Both hosts are connected via a Gigabit interface, so they expect the transfer to run at that full speed. But it doesn't.
While investigating the path through the network, all the usual checks for CRC errors and QoS drops were carried out: the links appeared clean and the nodes free of congestion. So what's the next step? Let's capture some traffic on one of the two hosts and check whether the TCP receiver is allowing the sender to transmit at full link speed.
From the Wireshark trace we discover that the receiver's window size is about 512 KB. So, what is the maximum throughput we can achieve with this TCP window size?
Here is the formula!
<strong>BW in bits per second</strong> = (TCP window size in bytes * 8) / RTT in seconds
Hence, with our 512 KB window size and an RTT of 30 ms, we get:
BW = (512,000 * 8) / 0.030 ≈ 136,533,333 bps = <strong>136.5 Mbps</strong>
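The calculation above can be sketched in a few lines of Python. This is just an illustration of the formula, with the window taken as 512,000 bytes (1 KB = 1000 bytes, as in the arithmetic above); the function name is ours, not from any library.

```python
def window_limited_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Maximum TCP throughput achievable with a given receive window:
    at most one full window can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

# 512 KB receive window, 30 ms round-trip time
bw = window_limited_throughput_bps(512_000, 0.030)
print(f"{bw / 1e6:.1f} Mbps")  # → 136.5 Mbps
```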
Not the best result when you have a full gigabit available, is it?
So why is the receiver advertising such a small window when it could allow much more?
The TCP receive window size is governed by the operating system, so it could in principle be a fixed, imposed value. However, that does not seem to be the case here: since Linux kernel 2.6.8, the receive buffer is auto-tuned dynamically, which means the TCP stack will grow the window up to the maximum memory the operating system allows for socket buffers.
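To see how far auto-tuning would need to grow the window here, we can compute the bandwidth-delay product: the amount of data that must be in flight to keep the pipe full. A minimal sketch (the function name is ours, for illustration):

```python
def required_window_bytes(link_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the receive window needed to keep
    a link of the given speed fully utilized at the given RTT."""
    return link_bps * rtt_seconds / 8

# 1 Gbps link, 30 ms round-trip time
bdp = required_window_bytes(1_000_000_000, 0.030)
print(f"{bdp / 1e6:.2f} MB")  # → 3.75 MB
```

So the receiver would need roughly a 3.75 MB window to fill the gigabit path, about seven times the 512 KB it is actually advertising.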
Moral of the story: if you sense that a data transfer is running slow, check whether the TCP window size is too small, and then start investigating whether your host, physical or virtual, has enough free memory to feed your traffic demand.
ac41a23 @ 2019-11-20