I was thinking about peer-to-peer networking (in the context of Pettycoin, of course) and I wondered: is sending ~1420 bytes of data really any slower than sending 1 byte on real networks? Similarly, is it worth going to extremes to avoid spilling over into two TCP packets?
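That ~1420-byte figure is in the ballpark of the usual Ethernet arithmetic (a quick sketch; exactly what headroom the number allows for is my guess, not stated above):

```python
# Rough per-segment payload arithmetic for Ethernet.  The ~1420 figure
# in the text presumably leaves a little headroom below this for
# additional TCP/IP options.
MTU = 1500            # standard Ethernet MTU
IP_HEADER = 20        # IPv4 header, no options
TCP_HEADER = 20       # TCP header, no options
TCP_TIMESTAMPS = 12   # TCP timestamp option, on by default on Linux

mss = MTU - IP_HEADER - TCP_HEADER      # 1460 bytes: the usual MSS
payload = mss - TCP_TIMESTAMPS          # 1448 usable bytes per segment
print(mss, payload)
```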
So I wrote a simple Linux TCP ping-pong client and server: the client connects to the server, then loops: it reads until it gets a ‘1’ byte, then responds with a single-byte ack. The server sends data ending in a ‘1’ byte, then reads the response byte, printing out how long it took. First 1 byte of data, then 101 bytes, all the way up to 9901 bytes. It does this 20 times, then closes the socket.
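The loop looks something like this (a minimal Python sketch of the same idea, not the actual test source — that’s linked below; TCP_NODELAY is assumed on both ends so Nagle’s algorithm doesn’t delay the small writes):

```python
# Sketch of the ping-pong measurement: the server sends a payload whose
# last byte is b'1', the client reads until it sees that byte and
# answers with a one-byte ack, and the server times the round trip.
import socket
import threading
import time

def pong(conn):
    """Client side: read until a b'1' arrives, then send a 1-byte ack."""
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            return
        if b'1' in chunk:
            conn.sendall(b'A')

def ping(conn, size):
    """Server side: send `size` bytes ending in b'1', time the ack."""
    payload = b'\0' * (size - 1) + b'1'
    start = time.monotonic()
    conn.sendall(payload)
    conn.recv(1)
    return (time.monotonic() - start) * 1000.0  # milliseconds

def run_test(sizes, rounds=20):
    listener = socket.create_server(('127.0.0.1', 0))
    port = listener.getsockname()[1]

    client = socket.create_connection(('127.0.0.1', port))
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    server_conn, _ = listener.accept()
    server_conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    threading.Thread(target=pong, args=(client,), daemon=True).start()

    results = {size: [ping(server_conn, size) for _ in range(rounds)]
               for size in sizes}
    server_conn.close()
    client.close()
    listener.close()
    return results

if __name__ == '__main__':
    # 1, 101, ... 9901 bytes, 20 pings each, as in the real test.
    for size, times in run_test(range(1, 9902, 100)).items():
        print(size, min(times))
```

Over loopback this measures almost nothing, of course; the point is to run the two halves on opposite ends of the network you care about.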
Here are the results on various networks (or download the source and result files for your own analysis):
On Our Gigabit Lan
Interestingly, we do win for tiny packets, but there’s no real penalty once we’re over a packet (until we get to three packets’ worth):
On Our Wireless Lan
Here we do see a significant decline as we enter the second packet, though extra bytes in the first packet aren’t completely free:
Via ADSL2 Over The Internet (Same Country)
Ignoring the occasional congestion from other uses of my home net connection, we see a big jump after the first packet, then another as we go from 3 to 4 packets:
Via ADSL2 Over The Internet (Australia <-> USA)
Here, packet size is completely lost in the noise; the carrier pigeons don’t even notice the extra weight:
Via 3G Cellular Network (HSPA)
I initially did this with Wifi tethering, but the results were weird enough that Joel wrote a little Java wrapper so I could run the test natively on the phone. It didn’t change the resulting pattern much, but I don’t know if this regularity of delay is a 3G or an Android thing. Here every packet costs, but you don’t win a prize for having a short packet:
Via 2G Network (EDGE)
This one actually gives you a penalty for short packets! 800 bytes to 2100 bytes is the sweet spot:
So if you’re going to send one byte, what’s the penalty for sending more? Eyeballing the minimum times from the graphs above:
| | Gigabit LAN | Wireless LAN | ADSL2 | 3G | 2G (EDGE) |
|---|---|---|---|---|---|
| Penalty for filling packet | 30% | 15% | 5% | 0% | 0%* |
| Penalty for second packet | 30% | 40% | 15% | 20% | 0% |
| Penalty for fourth packet | 60% | 80% | 25% | 40% | 25% |

* Average for EDGE actually improves by about 35% if you fill the packet
Turn off 802.11b support on your router and the wireless overhead should drop. There is a period in between each packet where the router looks for 802.11b devices and that wastes time if there are none.
Hmm, I was on 5GHz. Turned off 2.4GHz altogether, and can’t see any difference.
For your DSL tests it will be more enlightening if you increase the packet size by 1 byte on each round instead of 100. Currently all ADSL lines (I have seen) run on an ATM link layer, which will chop each IP packet into an integer number of 48-byte ATM cells (after adding some per-packet overhead), padding the last cell if need be (it’s slightly more complicated, but that is the gist of it). The consequence of this is that the RTT-by-packet-size plot should show nice quantization steps with a step size of 48 bytes. Since the unavoidable headers typically eat up most of a cell as well, you will see larger steps at packet boundaries (typically twice the per-ATM-cell latency).

The caveat with this is that depending on your bandwidth, the time increment of each “step” is small enough (1.0239 ms at 6656 kbps download and 640 kbps upload; 0.23567 ms at 16402 kbps download, 2558 kbps upload) that you might need a larger sample size than 20 to be able to actually see it. (Typically you can see the quantization best with a more robust estimator like the minimum, or the mean of all data points (per packet size) that fall into the 5 to 95% quantiles.)
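The cell arithmetic the comment describes can be sketched as follows (48 bytes of payload per 53-byte cell and the 8-byte AAL5 trailer are standard ATM/AAL5 figures; the 10-byte encapsulation overhead is only an illustrative assumption — the real value depends on the DSL link’s encapsulation):

```python
import math

ATM_CELL_PAYLOAD = 48   # bytes of payload carried per 53-byte ATM cell
AAL5_TRAILER = 8        # AAL5 appends an 8-byte trailer before padding

def atm_cells(ip_packet_len, encap_overhead=10):
    """Number of 48-byte ATM cells needed to carry one IP packet.

    `encap_overhead` stands in for the per-packet link-layer overhead
    (PPPoA, PPPoE, LLC/SNAP...); 10 bytes is a guess for illustration.
    """
    payload = ip_packet_len + encap_overhead + AAL5_TRAILER
    return math.ceil(payload / ATM_CELL_PAYLOAD)

# Growing the packet one byte at a time usually changes nothing --
# until a 48-byte cell boundary is crossed:
print(atm_cells(126))  # 3 cells (126 + 18 = 144 = 3 * 48 exactly)
print(atm_cells(127))  # 4 cells: one extra byte costs a whole cell
```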
I note that the time frame and effect size of the ATM quantization would be lost in the variance of your measurements…
I would expect there to be an additional hit per flow establishment in some networks, most notably in 3G.