New Route from SLAC to BINP/Novosibirsk

Les Cottrell. Page created: May 31, 2005



Since 2001 BINP, KEK and SLAC have been sharing the cost of a dedicated link from BINP to KEK. In April 2002, having identified congestion, this link was upgraded from 256kbps to 512kbps. The link has worked very satisfactorily, providing reliable, consistent service; however, it is costly and low performance by today's standards. At the International ICFA Workshop on HEP Networking, Grid and Digital Divide Issues for Global e-Science (HEPDG Workshop 2005, Daegu, Korea, May 23-27, 2005), Les Cottrell asked Greg Cole of GLORIAD whether BINP could use GLORIAD instead of the dedicated link to communicate with SLAC, KEK and ESnet. Greg thought this would be possible and, after communicating with his Russian and Chinese partners, confirmed it. Following this exchange the experts in China, Russia, KEK and ESnet worked to change the routing.


On May 31st 2005, Joe Burrescia of ESnet announced that he had changed the ESnet routing to prefer GLORIAD. The routes from SLAC to BINP changed between 00:45 and 00:55 on 5/31/2005 PDT (see the traceroute analysis), then returned to the original route between 1:25 and 1:35am, and then switched back to the new route between 8:45 and 8:55am. The new route from SLAC now goes West to East instead of East to West (i.e. via Amsterdam rather than via KEK). It has more hops but appears to have a slightly shorter RTT (250-300ms vs 370ms; see the Ping RTTs); maybe this is due to faster clocking rates and also less congestion. Also the available bandwidth (measured by packet pair dispersion techniques) went up from about 500kbits/s to a few hundred Mbits/s (see the ABwE plots)!
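The packet pair dispersion idea behind estimators like ABwE can be sketched as follows. This is an illustrative Python sketch of the general technique, not the ABwE code, and the packet size and dispersion numbers are hypothetical:

```python
# Packet pair dispersion sketch: two back-to-back packets leave the sender
# with minimal spacing; the bottleneck link spreads them out, and the
# spacing (dispersion) measured at the receiver bounds the bottleneck rate.

def bottleneck_estimate_bps(packet_size_bytes: float, dispersion_s: float) -> float:
    """Bottleneck rate estimate in bits/s from one packet pair."""
    return packet_size_bytes * 8 / dispersion_s

# Hypothetical numbers: 1500-byte packets arriving 24 microseconds apart
# imply a bottleneck near 500 Mbits/s; arriving 24 ms apart, near 500 kbits/s.
print(bottleneck_estimate_bps(1500, 24e-6))  # ~5e8 bits/s
print(bottleneck_estimate_bps(1500, 24e-3))  # ~5e5 bits/s
```

Real tools average over many pairs and filter out dispersion added by cross traffic, which is why their output is an estimate rather than a measurement.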

On June 3, 2005: Les Cottrell accessed the host at BINP. It is an OpenBSD host:

rainbow:cottrell {159} uname -a
OpenBSD 3.4 GENERIC#0 i386
He installed iperf, and a traceroute showed that the route went from BINP to Moscow, to Amsterdam, to Chicago, to ESnet and on to SLAC. The ABwE packet pair measurements showed that the available bandwidth was over 100 Mbits/s.

Running iperf at BINP to SLAC with an 8MB window yielded an achievable throughput of only about 470 kbits/s.

I ran an iperf server on rainbow:

rainbow:cottrell {158} bin/iperf -s -w 10m -p 5011 &
and tried to access it as a client from iepm-resp:
cottrell@iepm-resp:~> iperf -c -p 5011 -t 20 -i 5 -w 8m
Client connecting to, TCP port 5011
TCP window size: 16.0 MByte (WARNING: requested 8.00 MByte)
[  3] local port 33052 connected with port 5011
      Interval       Transfer     Bandwidth
[  3]  0.0- 5.0 sec  12.0 MBytes  20.1 Mbits/sec
[  3]  5.0-10.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 10.0-15.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 15.0-20.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 20.0-25.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 25.0-30.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 30.0-35.0 sec  0.00 Bytes  0.00 bits/sec
Eventually the server replied:
rainbow:cottrell {159} [  6]  0.0-213.4 sec  12.0 MBytes    472 Kbits/sec
Given the strange iperf results, Stanislav Shalunov of Internet2 installed thrulay on rainbow. We (Stanislav and Les) then used thrulay to measure throughput, achieving about 7Mbits/s. We achieved similar results in the opposite direction. Since the window size is 222KBytes, a maximum throughput of about 7Mbits/s is about right for an RTT of about 250msecs.
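The sanity check above is simple arithmetic: a TCP connection cannot transfer more than one window per round trip, so throughput is bounded by window/RTT. A short sketch (taking KBytes as 1000 bytes):

```python
# TCP throughput is bounded by one window of data per round trip.

def window_limited_bps(window_bytes: float, rtt_s: float) -> float:
    """Maximum single-stream TCP throughput in bits/s for a given window and RTT."""
    return window_bytes * 8 / rtt_s

# A 222 KByte window over a 250 ms RTT path:
print(window_limited_bps(222e3, 0.25) / 1e6)  # ~7.1 (Mbits/s)
```

This matches the ~7 Mbits/s that thrulay reported, which is why the window size, not the path, was identified as the limit.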

Further investigation Dec 2005

In preparation for turning off the 512kbps link from BINP to KEK, we made some further studies of this link.

The minimum ping RTT alternates between two values, 250ms and 300ms. It stays for long periods at one value and then switches to the other, depending on the routing and on where the peering between ESnet and GLORIAD occurs. Currently the route from SLAC goes via ESnet to Chicago, then to Amsterdam and Moscow and on across Russia. The return route from Novosibirsk goes via RBnet to Khabarovsk, then via KoreaNet (134.75.108.x) through PacificWave to CENIC, Stanford and SLAC.
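Distinguishing the two routing modes can be automated by pulling the minimum RTT out of ping's summary line. A minimal sketch, assuming the common Linux/BSD "min/avg/max" summary format (exact wording varies between ping implementations):

```python
import re

# Matches the numeric triple in summary lines such as
# "round-trip min/avg/max = 251.3/262.0/280.4 ms".
SUMMARY_RE = re.compile(r"=\s*([\d.]+)/([\d.]+)/([\d.]+)")

def min_rtt_ms(summary_line: str) -> float:
    """Extract the minimum RTT (ms) from a ping summary line."""
    m = SUMMARY_RE.search(summary_line)
    if not m:
        raise ValueError("no min/avg/max triple found")
    return float(m.group(1))

def route_mode(min_rtt: float) -> str:
    """Classify a minimum RTT against the two observed plateaus."""
    return "~250 ms route" if abs(min_rtt - 250) < abs(min_rtt - 300) else "~300 ms route"

line = "round-trip min/avg/max = 251.3/262.0/280.4 ms"
print(route_mode(min_rtt_ms(line)))  # ~250 ms route
```

Run against a long series of ping summaries, this makes the long plateaus at one value, and the switches between them, easy to spot.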

The TCP window sizes on rainbow are set to:

net.inet.tcp.sendspace = 16384
net.inet.tcp.recvspace = 16384
The bandwidth*delay product (BDP) at the ~300ms RTT of this path is about 3MB for 80Mbits/s and about 30MB for 800Mbits/s, so the above values for rainbow are about a factor of 200 too small.
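The BDP arithmetic can be checked directly: the window needed to keep a path full is the bandwidth times the round-trip time, converted to bytes.

```python
# Bandwidth*delay product: the TCP window needed to keep a path full.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Window size in bytes needed to sustain bandwidth_bps over rtt_s."""
    return bandwidth_bps * rtt_s / 8

RTT = 0.3  # ~300 ms, the longer of the two observed minimum RTTs
print(bdp_bytes(80e6, RTT) / 1e6)     # ~3 (MB) for 80 Mbits/s
print(bdp_bytes(800e6, RTT) / 1e6)    # ~30 (MB) for 800 Mbits/s
print(bdp_bytes(80e6, RTT) / 16384)   # ~183: rainbow's 16 KB windows are ~200x too small
```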

Looking at the performance from SLAC to BINP, Novosibirsk: before GLORIAD (BG?) we had a capacity of 512kbits/s between KEK and BINP. Now, with a single TCP stream, we get an achievable throughput (measured by thrulay) of up to 6 Mbits/s. There is considerable diurnal variation in these throughputs, with achievable throughput at times dropping to << 1 Mbits/s. The diurnal variations can also be observed in the ping RTTs.

If we push the throughput by using multiple parallel TCP streams (say 80, using iperf), we can get around 20-25Mbits/s. Using pathchirp's packet pair dispersion technique to estimate available bandwidth, we get 60-120 Mbits/s.

Thus achievable throughput has gone up by a factor of 10, and if we wish to push it and be less fair, by a factor of ~40-50. We also need to increase the TCP window size at rainbow.

Page owner: Les Cottrell