On December 18, 2000, we received email from Joe Izen of UT Dallas, saying:
> Is it understood why Xinchou only sees 144KB/s on large transfers
> between UTD and SLAC? I would have expected much better of
> Internet2. Thanks! -Joe
On February 6, 2001, Joe sent another email:
> I just (Tue Feb 6 08:36:10 CST 2001) observed about a 2-minute freeze
> of the Internet2 connection between UTD and SLAC. It interrupted
> xterm and emacs. Actually, I am emacs'ing locally a file in an
> afs-exported volume at SLAC. emacs loops, sucking in 100% of
> morticia's cpu when this happens.
> I've noticed this a few times/day over the past week. Is the cause understood?
> In general, the network connection has not been as snappy lately as
> it has in the past few months when doing interactive sessions.
The path characterization from SLAC to UT Dallas shows
a round trip time (RTT) of about 50 msec. between SLAC and UT Dallas.
It also shows that the route is mainly via Internet2, and that the
bottleneck (excluding the link out of the initial measuring host, flora05)
is indicated to be about 45Mbps (possibly a T3 link to the UT Dallas campus).
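These two numbers already suggest where trouble can come from. A quick back-of-the-envelope calculation (our own arithmetic, not part of the original measurements) shows the TCP window needed to fill a 45 Mbit/s path at 50 msec. RTT, and the throughput ceiling imposed by the 8 kByte default Solaris window referred to later on this page:

# Bandwidth-delay product and window-limited throughput (Python sketch)
bottleneck_bps = 45e6          # estimated bottleneck capacity (bits/s)
rtt_s = 0.050                  # measured round trip time (s)

bdp_bytes = bottleneck_bps / 8 * rtt_s
print("window needed to fill the pipe: %.0f kBytes" % (bdp_bytes / 1024))   # ~275 kBytes

default_window = 8 * 1024      # default Solaris TCP window (bytes)
ceiling_bps = default_window * 8 / rtt_s
print("ceiling with an 8 kByte window: %.1f Mbits/s" % (ceiling_bps / 1e6)) # ~1.3 Mbits/s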
The traceroute from UTDallas to SLAC indicates
that the main delay is added between 188.8.131.52 and losa-hstn.abilene.ucaid.edu,
which is probably the long haul from the Bay Area to Houston, Texas, and so
is to be expected. The lack of response at hops 10 and 11 is because the internal
SLAC routers are configured not to respond for security reasons.
Comparing with the traceroute from SLAC to UTDallas shows that
the routes are fairly symmetric.
The pingroute from SLAC to UTDallas indicates that
the losses start between 184.108.40.206 and shot.utdallas.edu.
The pingroute from UTDallas to SLAC
indicates that the packet loss starts (for small 100 Byte packets)
between the 1st and 2nd hops, i.e. between utdgw16.utdallas.edu and 220.127.116.11.
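Pingroute works by pinging successive hops along the route and seeing where loss first appears. For readers without the SLAC tool, the idea can be approximated with a short script such as the sketch below (the hop list is a placeholder to be filled in from a traceroute, and the output parsing assumes a Linux/GNU-style ping summary line):

import re
import subprocess

# Placeholder hop list; fill it in from a traceroute of the path under study.
hops = ["192.0.2.1", "192.0.2.2", "morticia.utdallas.edu"]

for hop in hops:
    # Send 20 pings to each hop and extract the loss figure from the summary
    # line, e.g. "20 packets transmitted, 19 received, 5% packet loss".
    out = subprocess.run(["ping", "-c", "20", hop],
                         capture_output=True, text=True).stdout
    match = re.search(r"([\d.]+)% packet loss", out)
    loss = match.group(1) + "%" if match else "unknown"
    print("%-30s loss: %s" % (hop, loss))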
Unfortunately, performance between SLAC and UT Dallas was not previously
measured by either of our regular monitoring projects.
We started monitoring with PingER in December, 2000. The
PingER RTT and loss graph indicates that the
packet loss started around January 16th, 2001.
We installed iperf at UTDallas and ran a server there. We then ran the client at the SLAC end
using the methodology outlined in
Bulk Throughput Measurements.
With iperf we were able to achieve a best throughput of 7.3Mbits/s
(0.9MBytes/s), and an average throughput of 2.5Mbits/s (308kBytes/s).
The throughput was very variable: measurements with similar numbers of streams and
flows, made within minutes of each other, differed by factors of 10. The
standard deviation of the throughput measurements was 2 Mbits/s and the IQR
was 3 Mbits/s. This variability is presumed to be due to variability
in the cross-traffic.
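For reference, the spread statistics quoted above (standard deviation and inter-quartile range, IQR) can be computed from the raw throughput samples as in the sketch below; the sample values shown are illustrative placeholders, not the actual measurements:

import statistics

samples_mbps = [0.4, 0.7, 1.2, 1.9, 2.3, 2.8, 3.5, 4.1, 5.6, 7.3]  # placeholder data

q1, q2, q3 = statistics.quantiles(samples_mbps, n=4)   # quartile cut points
print("std dev: %.1f Mbits/s" % statistics.stdev(samples_mbps))
print("IQR:     %.1f Mbits/s" % (q3 - q1))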
With a single flow we were only able to achieve a throughput
of about 50kbits/s. The loss rate was about 4.5% for the loaded case, and about
1.4% for the unloaded case.
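A loss rate of a few percent on a ~50 msec. path is by itself enough to cap a single standard TCP stream at a few Mbits/s. The Mathis et al. approximation, throughput ≈ MSS*C/(RTT*sqrt(loss)) with C ≈ 1.22, gives a rough upper bound; the sketch below applies it to the loss rates quoted above. Treat the results as order-of-magnitude estimates only; the observed single-flow rate was far lower still, presumably because of the small default window and timeouts:

from math import sqrt

def mathis_bps(mss_bytes, rtt_s, loss, c=1.22):
    # Mathis et al. upper-bound estimate for a single TCP stream (bits/s).
    return mss_bytes * 8 * c / (rtt_s * sqrt(loss))

for label, loss in [("loaded, 4.5% loss", 0.045), ("unloaded, 1.4% loss", 0.014)]:
    print("%s: about %.1f Mbits/s" % (label, mathis_bps(1460, 0.050, loss) / 1e6))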
Using bbftp (an FTP application
that supports multiple streams and large windows),
Itaru Kitayama of UTDallas was able to get ~500kBytes/s with 3 streams.
Using the ns-2 network simulator we are able to predict
throughput/goodput. For this link we set:
- the bottleneck queue length to 80 packets;
- the run time to 10 seconds;
- the bottleneck bandwidth to 45 Mbits/s;
- the RTT to 48 msec.;
- the bottleneck factor to 2 (the traffic the simulator puts into the bottleneck
is (Bottleneck_bandwidth/Flows)*Bottleneck_factor, so with Bottleneck_factor=2
the aggregate bandwidth offered to the bottleneck is twice its capacity; we set
the bottleneck factor to 2 since we guessed that the host offering the traffic
had a 100Mbit/s Fast Ethernet link, and the goodput numbers in parentheses below
are for a bottleneck factor of 1);
- the window size to 8kBytes (the default Solaris window size);
- the number of flows to 1.
The predicted goodput was 927kbits/s (909kbits/s), or about 116kBytes/s (114kBytes/s).
Setting the window size to 64kBytes increased the goodput to
6.7Mbits/s (4.2Mbits/s) or 836kBytes/s (528kBytes/s).
Increasing the number of flows to 10 while maintaining the window size at 64kBytes,
the predicted goodput is 40Mbits/s (42Mbits/s) or 5MBytes/s (5.3MBytes/s).
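As a rough cross-check on these predictions (our own arithmetic, not part of the ns-2 runs), the simple window-limited ceiling min(flows * window / RTT, bottleneck bandwidth) reproduces the trend; the ns-2 goodput figures come out somewhat lower, presumably because they also account for the losses caused by the offered cross-traffic:

def ceiling_bps(flows, window_bytes, rtt_s=0.048, bottleneck_bps=45e6):
    # Each flow can have at most one window of data in flight per RTT,
    # and the aggregate cannot exceed the bottleneck capacity.
    return min(flows * window_bytes * 8 / rtt_s, bottleneck_bps)

for flows, window in [(1, 8 * 1024), (1, 64 * 1024), (10, 64 * 1024)]:
    print("%2d flow(s), %2d kByte window: %.1f Mbits/s ceiling"
          % (flows, window // 1024, ceiling_bps(flows, window) / 1e6))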
For more information on achieving high throughput/goodput, and to see how well
the simulator agrees with observed measurements, see
Bulk throughput simulation.
We noted on February 9, 2001 that the
loss appeared to have
decreased. Ping indicated that the losses were now about:
----WWW4.SLAC.Stanford.EDU PING Statistics----
285 packets transmitted, 277 packets received, 2% packet loss
round-trip (ms) min/avg/max = 49/76/273
----morticia.utdallas.edu PING Statistics----
135 packets transmitted, 133 packets received, 1% packet loss
round-trip (ms) min/avg/max = 49/68/164
The improvement appeared to happen at about 11:00 February 7, 2001 GMT.
Since then the RTT appears to be more variable; typically this
indicates more congestion. The routes in both directions were
measured at 9:15am February 9, 2001, to be identical to those measured
when the heavy losses were occurring.
On March 1, 2001 Joe Izen enquired whether the 2% packet loss
problem had been tracked down. Les Cottrell ran pings separated by
1 second for an hour (3600 pings) from SLAC to morticia.utdallas.edu
across lunchtime 3/1/01 and observed no losses. The
PingER RTT and Loss plot for Dec 2000
through Feb 2001 indicates the problem went away on
February 11, 2001.
On September 23, 2001, we measured the iperf TCP throughput from
pharlap.slac.stanford.edu to morticia.utdallas.edu. The top 10% throughputs
are above 41Mbits/s. The pipechar indicates that
the bottleneck is probably a T3 link (43Mbps).
On November 10, '01 iperf performance from SLAC to Dallas
appeared very variable with
2 plateaus at ~ 40Mbits/s and 10 to 20Mbits/s.
- Pipechar shows the route from pharlap to morticia
goes via ESnet. The bottleneck appears to be about 40Mbps. On some of the pipechars a bottleneck
appears between hops 8 & 9.
The route from morticia
goes via Internet2, so the routes are asymmetric. This asymmetry was introduced on
November 5th, 2001, when pharlap was connected to a test OC12 link from ESnet in preparation
for SC2001. This should not be a problem.
- The maximum window buffers on morticia appear to be OK:
ndd /dev/tcp tcp_max_buf = 1048576
ndd /dev/tcp tcp_cwnd_max = 262144
ndd /dev/tcp tcp_xmit_hiwat = 8192
ndd /dev/tcp tcp_recv_hiwat = 8192
The window scale factors also appear to be set correctly. To verify this,
I ran snoop (a tcpdump-like Solaris tool) on pharlap and then from another window ran
iperf -c morticia.utdallas.edu -w 1024k -t 1
and looked at the opening packets:
cottrell@pharlap:~>snoop -i /tmp/utd-dump1 | head
1 0.00000 PHARLAP.SLAC.Stanford.EDU -> morticia.utdallas.edu TCP D=5000 S=37767 Syn Seq=1595573285 Len=0 Win=32804 Options=
2 0.04673 morticia.utdallas.edu -> PHARLAP.SLAC.Stanford.EDU TCP D=37767 S=5000 Syn Ack=1595573286 Seq=4184395739 Len=0 Win=45 Options=
As can be seen, the wscale factors are 5, which should allow a big (>1MB) window.
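As a quick sanity check on that statement (simple arithmetic, not taken from the trace): a window scale factor of 5 multiplies the 16-bit advertised window by 2^5, so the largest usable window is about 2 MBytes, comfortably above the 1 MByte requested with iperf -w 1024k:

wscale = 5
max_window = 65535 << wscale     # 16-bit window field scaled by 2**wscale
print(max_window, "bytes")       # 2097120 bytes, i.e. about 2 MBytes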
- We also looked at the cpu load on morticia using the Unix top command. It did not show heavy utilization during the iperf transfers:
last pid: 17552; load averages: 0.12, 0.08, 0.05 21:31:38
144 processes: 141 sleeping, 1 zombie, 2 on cpu
CPU states: 87.5% idle, 0.4% user, 12.1% kernel, 0.0% iowait, 0.0% swap
Memory: 1536M real, 157M swap in use, 2289M swap free
PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
17199 cottrel 23 59 0 5824K 5456K cpu1 0:15 3.46% iperf
17550 cottrel 1 58 0 2672K 1912K cpu0 0:00 0.17% top
207 root 5 58 0 3080K 1760K sleep 119:34 0.00% automountd
- The performance varies dramatically from time to time. Below is a plot of 10 second iperf throughput with a
256K window and various numbers of streams. It can be seen that the measurements at a fixed window and stream
count vary a lot from measurement to measurement. This may indicate congestion at times due to cross-traffic.
The window/stream combination we were using for the regular measurements is 256KB by 8 streams, which looks like a good choice.
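The scan behind such a plot can be scripted. The sketch below shows how one might drive the classic iperf client over a range of stream counts at a fixed 256 kByte window; the target host, test duration and the parsing of iperf's text output are assumptions, not the exact setup used for the plot:

import re
import subprocess

HOST = "morticia.utdallas.edu"   # assumed target; substitute the server actually in use

for streams in (1, 2, 4, 8, 16):
    # 10 second TCP test with a 256 kByte window and 'streams' parallel streams.
    out = subprocess.run(["iperf", "-c", HOST, "-w", "256k", "-P", str(streams), "-t", "10"],
                         capture_output=True, text=True).stdout
    # Take the last bandwidth figure printed; for -P > 1 this is the [SUM] line.
    rates = re.findall(r"([\d.]+) Mbits/sec", out)
    print("%2d streams: %s Mbits/sec" % (streams, rates[-1] if rates else "n/a"))

Note that 8 streams with a 256 kByte window give an aggregate window of about 2 MBytes, well above the roughly 280 kByte bandwidth-delay product of this path, which supports the remark that this is a good choice.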
Page owner: Les Cottrell