Everyone has experienced the frustration of transferring a fairly large file (an IOS image, a long text output) via TFTP: the transfer seems never-ending.
TFTP relies on UDP, so like any other UDP-based protocol one might expect it to aim for fast, unreliable delivery, using the available bandwidth without constantly waiting on the receiver (as RDP over UDP does).
But if we peek into its RFC (RFC 1350), we discover two really interesting things:
"If the server grants the
request, the connection is opened and the file is sent in fixed
length blocks of 512 bytes."
Hmm: so TFTP uses fixed-length blocks of roughly one third of a typical 1500-byte Ethernet MTU, which is far from optimal.
But something even scarier comes up:
"Each data packet contains one block of data, and must be acknowledged by an acknowledgment packet before the next packet can be sent."
Does this imply that every single data block must be acknowledged one-to-one at the application layer?
Yes, and that is not efficient at all.
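To make the lock-step behaviour concrete, here is a minimal sketch of a TFTP read client as described in RFC 1350 (host and filename are hypothetical, and error/retransmission handling is omitted for brevity):

```python
# Minimal TFTP (RFC 1350) read-client sketch: note the mandatory
# ACK after *every* 512-byte DATA block -- one round trip per block.
import socket
import struct

def build_rrq(filename: str) -> bytes:
    """RRQ packet: opcode 1 | filename | 0 | mode | 0 ("octet" = binary)."""
    return struct.pack("!H", 1) + filename.encode() + b"\x00octet\x00"

def build_ack(block: int) -> bytes:
    """ACK packet: opcode 4 | block number."""
    return struct.pack("!HH", 4, block)

def tftp_get(host: str, filename: str, port: int = 69) -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)
    sock.sendto(build_rrq(filename), (host, port))
    data = b""
    while True:
        pkt, addr = sock.recvfrom(516)        # 4-byte header + 512-byte block
        opcode, block = struct.unpack("!HH", pkt[:4])
        if opcode != 3:                       # 3 = DATA, 5 = ERROR
            raise RuntimeError(f"unexpected opcode {opcode}")
        data += pkt[4:]
        # The lock-step the RFC mandates: each block must be ACKed
        # before the server may send the next one.
        sock.sendto(build_ack(block), addr)
        if len(pkt) - 4 < 512:                # short block ends the transfer
            return data
```

The transfer only terminates when a block shorter than 512 bytes arrives, which is also straight from the RFC.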
Let’s fire up Wireshark and check whether a modern Linux TFTP daemon (run under xinetd) still complies with the RFC:
And it does: each TFTP block costs a full RTT to be transmitted, on top of the very small datagram size.
TCP, on the other hand, lets the sender keep transmitting until the receive window fills, and that window can grow to quite a sizeable figure.
In this example, the window size is more than 4 MB.
To test this, I have downloaded a 1MB file first via TFTP, and then via FTP.
The TFTP transfer took 5.3 seconds, while the FTP one took only 54 ms.
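As a sanity check, the 5.3 seconds are consistent with one round trip per 512-byte block (assuming the file is 1 MB in the decimal sense, an assumption on my part):

```python
# Back-of-envelope: a 1 MB file in 512-byte lock-step blocks.
FILE_SIZE = 1_000_000                      # assumed 1 MB = 10^6 bytes
BLOCK = 512
blocks = FILE_SIZE // BLOCK + 1            # full blocks + final short block
measured = 5.3                             # seconds, from the TFTP test
rtt_ms = measured / blocks * 1000          # implied per-block round trip
print(f"{blocks} blocks -> ~{rtt_ms:.1f} ms per block")
```

So even a modest per-block round trip of a few milliseconds, multiplied by roughly two thousand blocks, is enough to explain the slow transfer.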
It is glaringly obvious how much faster FTP is, but if so, why should we even consider using TFTP?
The first letter of TFTP stands for “Trivial”: unlike FTP, it requires no authentication, which makes it a handy fire-and-forget protocol.
However, if the file to be transferred is larger than about 10 MB, FTP should be the first and obvious choice.