Throughput to Taiwan, May 2006
Les Cottrell (SLAC) and Trey Chen (TWAREN). Page created: May 15, 2006.
We thus used iperf with a window size of 500KBytes and 40 streams to send TCP data from SLAC to TWAREN. The iperf command used was:
iperf -c iepmbw.twaren.net -P 40 -w 500k -t 360 -p 5000 -i 5 | tee iperf-slac-twaren
The results showed that we were able to average about 420 Mbits/s over a period of 6 minutes, and there were sustained periods of over 500 Mbits/s with a peak (over 5 seconds) of about 550 Mbits/s. It was also apparent that many of the losses were synchronized across multiple parallel streams.
[iepm@iepmbw ~]$ ping iepm-bw.cesnet.cz
PING iepm-bw.cesnet.cz (188.8.131.52) 56(84) bytes of data.
64 bytes from iepm-bw.cesnet.cz (184.108.40.206): icmp_seq=0 ttl=54 time=358 ms
64 bytes from iepm-bw.cesnet.cz (220.127.116.11): icmp_seq=1 ttl=54 time=358 ms
64 bytes from iepm-bw.cesnet.cz (18.104.22.168): icmp_seq=2 ttl=54 time=358 ms
64 bytes from iepm-bw.cesnet.cz (22.214.171.124): icmp_seq=3 ttl=54 time=358 ms
64 bytes from iepm-bw.cesnet.cz (126.96.36.199): icmp_seq=4 ttl=54 time=358 ms
--- iepm-bw.cesnet.cz ping statistics ---
6 packets transmitted, 5 received, 16% packet loss, time 5028ms
rtt min/avg/max/mdev = 358.634/358.754/358.858/0.389 ms, pipe 2
[iepm@iepmbw ~]$

IEPM-BW pathchirp measurements from TWAREN to CESnet indicated an available bandwidth of about 475 Mbits/s. We ran iperf from TWAREN to CESnet with a window size of 1 MByte as follows:
iperf -c iepmbw.twaren.net -P 40 -w 500k -t 360 -p 5000 -i 5 | tee iperf-twaren-cesnet
Looking at the results, we were able to average 465 Mbits/s over the 6 minutes, with a peak measured over 5 seconds of 518 Mbits/s.
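As a rough sanity check on this number, the aggregate TCP window across the parallel streams divided by the round-trip time bounds the achievable throughput. The sketch below is a back-of-the-envelope calculation, not part of the original measurement; it assumes 40 streams of roughly 500,000 bytes each (the requested 500k window) against the 358 ms RTT measured in the ping output above.

```shell
#!/bin/sh
# Window-limited throughput estimate: streams * window * 8 bits / RTT.
# Values taken from the measurements above; 500k is treated as 500,000
# bytes for simplicity (iperf may round the window differently).
STREAMS=40
WINDOW_BYTES=500000
RTT_MS=358
MBITS=$((STREAMS * WINDOW_BYTES * 8 / RTT_MS / 1000))
echo "window-limited throughput ~ ${MBITS} Mbits/s"
```

This works out to roughly 446 Mbits/s, consistent with the 465 Mbits/s averaged in the run above.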
iperf -c 188.8.131.52 -p 5000 -t 300 -w $window -P $stream
where: $window * $stream = 5, 7.5, 10, 12.5, 18.5, 25, 37.5, or 50 (MBytes), when $stream = 2, 4, 8, 16, 32, 64, and 126; and
thrulay 184.108.40.206 -p 5003 -i 30 -t 300 -w $window -m $stream
where: $window * $stream = 5, 7.5, 10, 12.5, 18.5, 25, 37.5, or 50 (MBytes), when $stream = 2, 4, 8, 16, 32, 64, and 126 are used for the number of streams. After statistically analyzing the results, Trey recommended that the best values of "$window & $stream" for iperf are "2.31M & 8" or "1.16M & 16," and for thrulay, "781K & 32" or "391K & 64" or "195K & 126."
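Holding the aggregate window fixed while varying the stream count can be expressed as a small loop. This is only a sketch of how such a test matrix could be generated: the 25 MByte total is one of the aggregates listed above, the destination address is the placeholder used earlier on this page, and KByte granularity for the per-stream window is an assumption.

```shell
#!/bin/sh
# Emit one iperf command per stream count, choosing the per-stream
# window so that window * streams stays at a 25 MByte aggregate.
TOTAL_KB=25600                         # 25 MBytes expressed in KBytes
for STREAMS in 2 4 8 16 32 64 126; do
    WINDOW_KB=$((TOTAL_KB / STREAMS))  # per-stream window in KBytes
    echo "iperf -c 188.8.131.52 -p 5000 -t 300 -w ${WINDOW_KB}K -P ${STREAMS}"
done
```

The printed commands can then be run (or fed to a scheduler) one at a time; repeating the loop for each aggregate total reproduces the full matrix described above.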
Trey repeated the above tests with 20 second measurement times, halving the requested window sizes to allow for Linux doubling them. The results are shown here. After statistically analyzing these results and taking Les's suggestion (use the same window and stream settings for iperf and thrulay) into consideration, Trey suggested that $stream=126 & $window=149K is better. So the commands that should be used to perform iperf & thrulay probing tests between our sites are:
1. for iperf: iperf -c 220.127.116.11 -p 5000 -t 20 -i 5 -w 149K -P 126
2. for thrulay: thrulay 18.104.22.168 -p 5003 -t 20 -i 5 -w 149000 -m 126
Note: you cannot use 149K as thrulay's window size value.
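The two recommended probes could be wrapped in a short script. This is only a sketch: it reuses the placeholder addresses from the text above, and it assumes (per the note) that thrulay takes its window in plain bytes while iperf accepts the 149K suffix. The script prints the exact command lines; replace echo with the real invocations to run them.

```shell
#!/bin/sh
# Recommended 20 s probe settings from the analysis above.
HOST_IPERF=220.127.116.11       # placeholder address from the text
HOST_THRULAY=18.104.22.168      # placeholder address from the text
WINDOW_K=149
STREAMS=126
WINDOW_BYTES=$((WINDOW_K * 1000))   # thrulay wants bytes, not "149K"
echo "iperf -c $HOST_IPERF -p 5000 -t 20 -i 5 -w ${WINDOW_K}K -P $STREAMS"
echo "thrulay $HOST_THRULAY -p 5003 -t 20 -i 5 -w $WINDOW_BYTES -m $STREAMS"
```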
Les responded: Even though you get the best performance over 20 seconds with, say, 120 streams, I do not recommend this. It can lead to instabilities in throughput (see how much the throughput can vary at 120 streams) and in the behavior of the streams (e.g. one of the streams can be starved, get little throughput, and take a long time to finish, or there may be OS/system problems), and it may be regarded as unfair. I would prefer to set the number of streams close to the top of the knee of the curves, say at 32 streams. That also has the nice feature that it typically only uses about 90% of the achievable bandwidth, thus at least leaving something for others. I admit there can be instability in the throughputs at 32 streams, but I feel it is a better choice.