International Committee for Future Accelerators (ICFA)
Standing Committee on Inter-Regional Connectivity (SCIC)
Chairperson: Professor Harvey Newman, Caltech
ICFA SCIC Network Monitoring Report
Prepared by the ICFA SCIC Monitoring Working Group
On behalf of the Working Group:
Les Cottrell cottrell@slac.stanford.edu
January 2007 Report of the ICFA-SCIC Monitoring Working Group
Edited by R. Les Cottrell and Shahryar Khan on behalf of the ICFA-SCIC Monitoring WG
Created January 7, 2007, last update January 15, 2007
ICFA-SCIC Home Page | Monitoring WG Home Page
This report is available from http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan07/
Variability of performance between and within regions
Comparisons with Economic and Development Indicators
Africa and South Asia: Comparison between Min and Avg. RTTs
High Performance Network Monitoring
New Monitoring and Diagnostic Efforts in HEP
Accomplishments since last report
Efforts to Improve PingER Management
TULIP - IP Locator Using Triangulation
ViPER (Visualization for PingER)
Digital Divide Publications/Presentations
Appendix: Countries in PingER Database
Internet performance is improving each year, with throughputs typically improving by 40-50% per year and losses by 25-45% per year, and for some regions such as S. E. Europe, even more. Geosynchronous satellite connections are still important to countries with poor telecommunications infrastructure and for outlying areas. However, the number of countries with fiber connectivity has increased and continues to increase, and in most cases satellite links are now used as backup or redundant links. In general, for HEP countries satellite links are being replaced with land-line links with improved performance (in particular for RTT). On the other side of the coin, Internet usage is increasing (see http://www.internetworldstats.com/stats.htm), application demands are growing (see for example [bcr]), and the expected reliability is increasing, so we cannot be complacent.
In general, throughput measured from within a region is much higher than when measured from outside. Links between the more developed regions, including N. America[1], Europe and E. Asia, perform much better than links to developing regions such as Africa and South Asia.
There is a strong positive correlation between the Internet performance metrics and various economic and development indices available from the UN and ITU. Besides being useful in their own right these correlations are an excellent way to illustrate anomalies and for pointing out measurement/analysis problems. The large variations between sites within a given country illustrate the need for careful checking of the results and the need for multiple sites/country to identify anomalies. Also given the difficulty of developing the human and technical indicators (at best they are updated once a year and usually much less frequently), having indicators such as PingER that are constantly and automatically updated is a useful complement.
For modern HEP collaborations and Grids there is an increasing need for high-performance monitoring to set expectations, provide planning and trouble-shooting information, and to provide steering for applications.
To quantify and help bridge the Digital Divide, enable world-wide collaborations, and reach-out to scientists world-wide, it is imperative to continue and extend the PingER monitoring coverage to all countries with HEP programs and significant scientific enterprises.
This report may be regarded as a follow-on to the May 1998 Report of the ICFA-NTF Monitoring Working Group [icfa-98], the January 2003 Report of the ICFA-SCIC Monitoring Working Group [icfa-03], the January 2004 Report [icfa-04], the January 2005 Report [icfa-05] and the January 2006 Report [icfa-06].
The current report updates the January 2006 report, but is complete in its own right in that it includes the tutorial information from the previous reports. The main changes in this year's report are:
· Figures 1-6, 8-10 and 13 and Tables 1, 2, 4 and 6 have all been updated
· The text related to all the above tables and figures has been updated.
· Sections have been added on:
o A Case Study for Pakistan
o LHC-OPN Monitoring
o Related HEP research
o Tools to manage, analyze and visualize the PingER data
§ PingER Host search tool
§ PingER Executive plots
§ ViPER visualization
§ PingER data validation and management
§ Tools to validate the PingER data
· Figures 22-25 are new.
· We have updated the section on PingER publications and talks.
The formation of this working group was requested at the ICFA/SCIC meeting at CERN in March 2002 [icfa-mar02]. The mission is to: Provide a quantitative/technical view of inter-regional network performance to enable understanding the current situation and making recommendations for improved inter-regional connectivity.
The lead person for the monitoring working group was identified as Les Cottrell. The lead person was requested to gather a team of people to assist in preparing the report and to prepare the current ICFA report for the end of 2002. The team membership consists of:
Table 1: Members of the ICFA/SCIC Network Monitoring team

Name | Institution | Country | Email
Les Cottrell | SLAC | US | cottrell@slac.stanford.edu
Sergei Berezhnev | RUHEP | Russia | sfb@radio-msu.net
Sergio F. Novaes | FNAL | | novaes@fnal.gov
Fukuko Yuasa | KEK | Japan | Fukuko.Yuasa@kek.jp
Shawn McKee | I2 HEP Net Mon WG | |
· Obtain as uniform a picture as possible of the present performance of the connectivity used by the ICFA community
· Prepare reports on the performance of HEP connectivity, including, where possible, the identification of any key bottlenecks or problem areas.
There are two complementary types of Internet monitoring reported on in this report.
1. In the first we use PingER [pinger], which uses the ubiquitous "ping" utility available as standard on most modern hosts. Details of the PingER methodology can be found in the May 1998 Report of the ICFA-NTF Monitoring Working Group [icfa-98] and [ejds-pinger]. PingER provides low-intrusiveness (~100 bits/s per host pair monitored[2]) measurements of Round Trip Time (RTT), loss and reachability (if a host does not respond to a set of 21 pings it is presumed to be non-reachable). The low intrusiveness makes the method very effective for measuring regions and hosts with poor connectivity. Since the ping server is pre-installed on all remote hosts of interest, minimal support is needed from the remote host (no software to install, no account needed, etc.).
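The core PingER measurement can be illustrated by parsing the summary lines printed by the standard Unix ping utility. The sketch below is a simplified illustration, not PingER's actual code; the regular expressions assume the Linux iputils ping output format, and the sample output is made up:

```python
import re

def parse_ping_summary(output: str) -> dict:
    """Extract packet loss (%) and min/avg/max RTT (ms) from Unix ping output."""
    loss = float(re.search(r"([\d.]+)% packet loss", output).group(1))
    m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)", output)
    rtt_min, rtt_avg, rtt_max = (float(x) for x in m.groups())
    return {"loss_pct": loss, "rtt_min": rtt_min,
            "rtt_avg": rtt_avg, "rtt_max": rtt_max}

# Hypothetical output of a 21-ping probe, as used by PingER's reachability test
sample = """21 packets transmitted, 20 received, 4% packet loss, time 21005ms
rtt min/avg/max/mdev = 152.1/180.4/310.7/35.2 ms"""

stats = parse_ping_summary(sample)
print(stats["loss_pct"], stats["rtt_min"])  # 4.0 152.1
```

A host whose output contains "100% packet loss" for all 21 pings would be classified as non-reachable.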
2. The second method (IEPM-BW [iepm], perfSONAR [perfSONAR], etc.) is for measuring high network and application throughput between hosts with excellent connections. Examples of such hosts are to be found at HEP accelerator sites, tier 1 and 2 sites, major Grid sites, and major academic and research sites in N. America, Europe and E. Asia.
The PingER data and results extend back to the start of 1995, and thus provide a valuable history of Internet performance. PingER has over 30 monitoring nodes in 14 countries that monitor over 700 remote nodes at over 600 sites in around 120 countries (see PingER Deployment [pinger-deploy]). These countries contain over 89% of the world's population (see Table 2) and over 99% of the online users of the Internet. Most of the hosts monitored are at educational or research sites. We try to get at least 2 hosts per country to help identify and avoid anomalies at a single host, and we are making efforts to increase the number of monitored hosts as far as we can. The requirements for the remote host can be found in [host-req]. Fig. 1 below shows the locations of the monitoring and remote (monitored) sites.
Figure 1: Locations of PingER monitoring and remote sites as of Jan 2007.
There are over two thousand monitoring/monitored-remote-host pairs, so it is important to aggregate the data by "affinity groups". PingER provides aggregation by affinity groups such as HEP experiment collaborator sites, Top Level Domain (TLD), Internet Service Provider (ISP), or world region. The world regions, as defined for PingER, and the countries monitored are shown below in Fig. 2. The regions start from the U.N. definitions [un]; we modify them to take into account which countries have HEP interests and to try to ensure that the countries in a region have similar performance.
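The aggregation step can be sketched as follows. This is a hypothetical illustration, not PingER's actual data model: the TLD-to-region table and the measurement tuples are made up for the example.

```python
from collections import defaultdict
from statistics import median

# Hypothetical mapping from top-level domain to a PingER-style region
REGION_BY_TLD = {"edu": "N. America", "ch": "Europe", "jp": "E. Asia", "in": "S. Asia"}

def aggregate_by_region(measurements):
    """Group per-host-pair loss measurements by the remote host's region
    (inferred here from its TLD) and return the median loss per region."""
    groups = defaultdict(list)
    for remote_host, loss_pct in measurements:
        tld = remote_host.rsplit(".", 1)[-1]
        groups[REGION_BY_TLD.get(tld, "Unknown")].append(loss_pct)
    return {region: median(vals) for region, vals in groups.items()}

# Made-up (host, monthly loss %) pairs
data = [("slac.stanford.edu", 0.1), ("cern.ch", 0.2), ("kek.jp", 0.3),
        ("tifr.res.in", 2.5), ("niit.edu.pk", 4.0)]
print(aggregate_by_region(data))
```

The same grouping idea applies to any affinity group (experiment, ISP, region); only the lookup table changes.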
Figure 2: Major regions of the world for PingER aggregation by regions
More details on the regions are provided in Table 2 that highlights the number of countries monitored in each of these regions, and the distribution of population in these regions.
Table 2: PingER Monitored Countries and populations by region, Jul-Dec 2006

Region | # of Countries | % of World Population
| 32 | 12
| 9 | 2
| 22 | 7
| 16 | 7
| 2 | 5
| 4 | 22
| 6 | 6
South | 6 | 1
| 5 | 22
| 8 | 3
| 4 | 0.5
| 1 | 2
Total | 115 | 89.2
To assist in interpreting the results in terms of their impact on well-known applications, we categorize the losses into quality ranges. These are shown below in Table 3.
Table 3: Quality ranges used for loss

Quality | Excellent | Good | Acceptable | Poor | Very Poor | Bad
Loss | <0.1% | >=0.1% & <1% | >=1% & <2.5% | >=2.5% & <5% | >=5% & <12% | >=12%
More on the effects of packet loss and RTT can be found in the Tutorial on Internet Monitoring & PingER at SLAC [tutorial], briefly:
· At losses of 4-6% or more, video-conferencing becomes irritating and non-native language speakers become unable to communicate. The occurrence of long delays of 4 seconds or more (such as may be caused by timeouts in recovering from packet loss) at a frequency of 4-5% or more is also irritating for interactive activities such as telnet and X windows. Conventional wisdom among TCP researchers holds that a loss rate of 5% has a significant adverse effect on TCP performance, because it will greatly limit the size of the congestion window and hence the transfer rate, while 3% is often substantially less serious (Vern Paxson). A random loss of 2.5% will result in Voice over Internet Protocol (VoIP) becoming slightly annoying every 30 seconds or so. A more realistic burst loss pattern will result in VoIP distortion going from not annoying to slightly annoying as the loss goes from 0 to 1%. Since TCP throughput for the standard (Reno-based) TCP stack goes as 1/sqrt(loss) [mathis] (see M. Mathis, J. Semke, J. Mahdavi, T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Computer Communication Review, volume 27, number 3, pp. 67-82, July 1997), it is important to keep losses low to achieve high throughput.
· For RTTs, studies in the late 1970s and early 1980s showed that one needs < 400ms for high productivity interactive use. VOIP requires a RTT of < 250ms or it is hard for the listener to know when to speak.
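The Mathis relation quoted above can be used directly to estimate the best-case standard-TCP throughput for a given loss rate and RTT. A minimal sketch, assuming an MSS of 1460 Bytes and ignoring the order-unity constant in [mathis]:

```python
from math import sqrt

def mathis_throughput_mbps(rtt_ms: float, loss: float, mss_bytes: int = 1460) -> float:
    """Upper bound on standard (Reno-based) TCP throughput in Mbit/s:
    throughput ~ MSS / (RTT * sqrt(loss))."""
    return (mss_bytes * 8) / ((rtt_ms / 1000) * sqrt(loss)) / 1e6

# Halving the RTT doubles the bound; halving the loss only gains sqrt(2).
print(round(mathis_throughput_mbps(rtt_ms=100, loss=0.01), 2))    # 1.17 Mbit/s
print(round(mathis_throughput_mbps(rtt_ms=100, loss=0.0025), 2))  # 2.34 Mbit/s
```

This makes concrete why a 1% loss on a 100 ms intercontinental path caps a single standard TCP stream at around a megabit per second, however fast the links are.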
It must be understood that these quality designations apply to normal Internet use. For high performance, and thus access to data samples and effective partnership in distributed data analysis, much lower packet loss rates may be required.
Of the two metrics, loss and RTT, loss is more critical since a lost packet will typically cause timeouts that can last for several seconds. Moreover, RTT increases with the distance and the number of hops between two nodes, whereas loss is less distance dependent: the RTT between a node at SLAC and a distant site is dominated by the fixed propagation delay along the route, while the loss mainly reflects congestion along the path.
Figure 3: Jul-Dec '06 median average monthly packet loss seen from SLAC to the world.
Fig. 3 shows a snapshot of the median average monthly losses from SLAC to the world between July and December 2006. We observe that most countries have low (< 1%) losses, with most of the poor or worse performance being confined to developing regions, in particular Africa.
Another way of looking at the losses is to see how many hosts fall in the various loss quality categories defined above as a function of time. An example of such a plot is seen in Fig 4.
Figure 4: Number of hosts measured from SLAC for each quality category from February 1998 through December 2006.
It can be seen that recently most sites fall in the good quality category. The numbers at the bottom indicate the percentage of total sites that see good or better packet loss at the start of the year. Also the number of sites with good quality has increased from about 55% to about 75% in the 9 years displayed. The plot also shows the increase in the total number of sites monitored from SLAC over the years. The improvements are particularly encouraging since most of the new sites are added in developing regions.
Towards the end of 2001 the number of sites monitored started dropping as sites blocked pings due to security concerns.
The increases in monitored sites towards the end of 2002 and early 2003 were due to help from the Abdus Salam International Centre for Theoretical Physics (ICTP). The ICTP held a Round Table meeting on Developing Country Access to On-Line Scientific Publishing: Sustainable Alternatives [ictp] in Trieste.
The increases in 2004 were due to adding new sites, especially in Africa and S. America.
In 2005, the Pakistan Ministry of Science and Technology (MOST) and the US State Department funded SLAC and the National University of Sciences and Technology's (NUST) Institute of Information Technology (NIIT) to collaborate on a project to improve and extend PingER. As part of this project, and with the increased interest from Internet2 in "Hard to Reach Network Places", many new sites in South Asia and other developing regions were added.
Again in 2006, in preparation for a conference on Sharing Knowledge Across the Mediterranean at ICTP Trieste, Nov 6-8, 2006, we added many new sites, especially in Africa and around the Mediterranean.
Fig. 5 below shows the long term loss trends for the various regions as seen from SLAC.
Figure 5: Packet loss trends from SLAC to various regions of the world.
The following general observations can be made for the losses:
· For most regions the improvement in losses is typically between 25% and 45% per year.
· The better regions (N. America, Europe, E. Asia and Oceania) are achieving better than 1% packet loss for most of their sites seen from SLAC.
Fig. 6 shows the world's connected population fractions obtained by dividing countries up by loss quality seen from the US, and adding the connected populations for the countries (we obtained the population/country figures from "How many Online" [nua] for 2001 and from CIA World Factbook for 2006 [cia-pop-figures]).
Figure 6: Fraction of the world's connected population in countries with measured loss performance in 2001 and Dec 2006.
It can be seen that in 2001, <20% of the population lived in countries with acceptable or better packet loss. By December 2006 this had risen to 53%. The coverage of PingER has also increased from about 50 countries at the start of Jan 2001 to over 120 in December 2006. This in turn reduced the fraction of the connected population for which PingER has no measurements from 49% to 10%. The results are even more encouraging when one bears in mind that the newer countries being added typically are from regions that have traditionally poorer connectivity.
It is interesting to compare the packet losses seen by various regions with those seen by residential DSL customers in the San Francisco Bay Area. This is shown in Fig. 7 below.
Figure 7: Losses from SLAC to various world regions compared with that for residential customers in the San Francisco Bay Area.
There are limits to the minimum RTT due to the speed of light in fibre or of electricity in copper. Typically today, the minimum RTTs for terrestrial circuits are about 2 * distance / (0.6 * c), or roughly 1 ms of RTT per 100 km of distance, where c is the speed of light in vacuum, the factor of 2 accounts for the round trip, and 0.6*c is roughly the speed of light in fibre. For geostationary satellite links, the minimum RTTs are between 500 and 600 ms.
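The rule of thumb of about 1 ms of RTT per 100 km follows directly from the speed of light in fibre; a quick sketch (the path length is an assumed, illustrative figure, not a measured route):

```python
C_KM_PER_S = 300_000   # speed of light in vacuum, km/s
FIBRE_FACTOR = 0.6     # signals travel at roughly 0.6c in fibre

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum RTT over terrestrial fibre: 2 * distance / (0.6 * c)."""
    return 2 * distance_km / (FIBRE_FACTOR * C_KM_PER_S) * 1000

# An assumed 9,000 km path, of the order of US West Coast to Western Europe
print(round(min_rtt_ms(9000)))  # 100 ms floor, before queuing or routing detours
```

Real routes are longer than the great-circle distance and add router and queuing delays, so measured min-RTTs sit above this floor.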
Fig. 8 below shows the trends of the min-RTT measured from ESnet sites in the US to various regions of the world.
Figure 8: Minimum RTT measured from the US to various regions of the world.
Fig. 9 shows the minimum RTT from the US measured in December 2006, compared with the 2003 and 2000 results.
Figure 9: December 06 comparison of Minimum RTT with 2003 and 2000 results
It is seen that the minimum RTTs to many countries have improved markedly, typically where satellite links have been replaced by terrestrial ones.
We also combine the loss and RTT measurements using throughput = 1460 Bytes / (RTT * sqrt(loss)) from [mathis], where 1460 Bytes is the TCP Maximum Segment Size for a standard 1500 Byte MTU. The results are shown in Fig. 10. The orange line shows a ~40% improvement/year, or about a factor of 10 in under 7 years.
Figure 10: Derived throughput as a function of time seen from ESnet sites to various regions of the world. The numbers in parentheses are the number of monitoring/remote host pairs contributing to the data. The lines are exponential fits to the data.
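The quoted ~40%/year improvement compounds to roughly an order of magnitude in under seven years, which is easy to verify:

```python
from math import log

growth = 1.40                      # ~40% throughput improvement per year
years_for_10x = log(10) / log(growth)
print(round(years_for_10x, 1))     # 6.8 -> a factor of 10 in under 7 years
print(round(growth ** 7, 2))       # 10.54 after a full 7 years
```

The same arithmetic shows why a region improving at only 25%/year falls steadily further behind one improving at 40%/year, even though both are improving.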
The data for several of the developing countries extends back only about five years, so some care must be taken in interpreting the long term trends. With this caveat, it can be seen that links between the more developed regions, including N. America and Europe, show the best and most rapidly improving performance.
To assist in developing a less N. American view of the Digital Divide, we added many more hosts in developing countries to the list of hosts monitored from CERN.
Figure 11: Derived throughputs to various regions as seen from CERN. The open square points are the monthly averages for E. Asia.
The increases for North America and Europe are slower, consistent with their links already performing well.
The exponential trendline fits to the monthly averages, though useful for guiding the eye and showing long term trends, can hide changes such as network upgrades, which tend to happen in a stepwise fashion. To better visualize such major changes in performance we added the capability to average the data into yearly intervals. This is shown in Fig. 12, where the data is normalized as in Fig. 11 and there is one point per year. The lines are smoothed to join the points and are simply to assist the eye. By comparing with Fig. 9 it can be seen that there are several instances of step changes in performance not seen there. In particular, note the improved performance as parts of Latin America moved from satellite to fibre in 2001, and the impact of ESnet re-routing E. Asian traffic (in particular for the Japanese academic and research networks).
Figure 12: Yearly averaged normalized derived TCP throughputs from the US to various regions of the world.
The throughput results presented so far in this report have been measured from North America or, to a lesser extent, from CERN. Table 4 extends this to the derived throughputs between all the monitoring and monitored regions.
Table 4: Derived throughputs in kbits/s from monitoring hosts to monitored hosts by region of the world for December 2006
As expected, it can be seen that within regions (the circled cells) performance is generally better than between regions. Also, performance is better between closely located regions, such as Europe and S. E. Europe, than between more distant ones.
To provide further insight into the variability in performance for various regions of the world seen from SLAC Fig. 13 shows various statistical measures of the losses and derived throughputs. The regions are sorted by the median of the measurement type displayed. Note the throughput graph uses a log y-scale to enable one to see the regions with poor throughput. The countries comprising a region can be seen in Fig. 2.
Figure 13: 25th percentile, median and 75th percentile derived throughputs and losses for various regions measured from SLAC for Oct-Dec '05.
The difference in throughput between N. America and Europe is an artifact of the measurements being made from N. America (SLAC), which has a much shorter RTT (roughly between a factor of 2 and 20, or for the average site closer to a factor of 3 to 4) to N. American than to European sites. Since the derived throughput goes as 1/RTT, this favors N. America.
Various economic indicators have been developed by the U.N. and the International Telecommunications Union (ITU). It is interesting to see how well the PingER performance indicators correlate with the economic indicators. The comparisons are particularly interesting in cases where the agreement is poor, and may point to some interesting anomalies or suspect data. Also given the difficulty of developing the human and technical indicators (at best they are updated once a year and usually much less frequently), having indicators such as PingER that are constantly and automatically updated is a useful complement.
One such index that covers many countries and is reasonably up-to-date is the UNDP Human Development Index (HDI) (see http://hdr.undp.org/reports/global/2002/en/), a summary measure of human development. It measures the average achievements in a country in three basic dimensions of human development:
· A long and healthy life, as measured by life expectancy at birth
· Knowledge, as measured by the adult literacy rate (with two-thirds weight) and the combined primary, secondary and tertiary gross enrolment ratio (with one-third weight)
· A decent standard of living, as measured by GDP per capita (PPP US$)
Fig. 14 shows a scatter plot of the HDI versus the PingER derived throughput for July 2006. Each point is colored according to the country's region. A logarithmic fit is also shown. A logarithmic form is probably appropriate since Internet performance is increasing exponentially in time, so the differences between countries can be related to the number of years they are behind the developed countries, while human development is more linear. Since the PingER derived TCP throughput is inversely proportional to RTT, countries close to the monitoring host in the US are favored.
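The logarithmic fit used in these comparisons amounts to ordinary least squares on log-transformed throughput. A minimal sketch; the (throughput, HDI) pairs below are made up for illustration, not real PingER/UNDP data:

```python
from math import log10

def logarithmic_fit(xs, ys):
    """Least-squares fit of y = a + b*log10(x)."""
    lx = [log10(x) for x in xs]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ys) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ys))
         / sum((u - mx) ** 2 for u in lx))
    a = my - b * mx
    return a, b

# Hypothetical (derived throughput in kbits/s, HDI) pairs
data = [(100, 0.45), (500, 0.60), (2000, 0.75), (10000, 0.90)]
a, b = logarithmic_fit(*zip(*data))
print(round(a, 2), round(b, 2))  # -0.01 0.23
```

A positive slope b means each decade of throughput improvement is associated with a fixed increment in HDI, matching the "years behind" interpretation above.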
Figure 14: Comparison of PingER derived throughputs seen from the US with the UNDP Human Development Index.
The Network Readiness Index (NRI), from the Center for International Development at Harvard University, is another such indicator.
Figure 15: PingER throughputs measured from the US vs. the Network Readiness Index (NRI).
Some of the outlying countries are identified by name. Countries at the bottom right of the right hand graph may be concentrating on Internet access for all, while countries in the upper right may be focusing on excellent academic & research networks.
The Digital Access Index (DAI) created by the ITU combines eight variables, covering five areas, to provide an overall country score. The areas are availability of infrastructure, affordability of access, educational level, quality of ICT services, and Internet usage. The results of the Index point to potential stumbling blocks in ICT adoption and can help countries identify their relative strengths and weaknesses.
Figure 16: PingER derived throughput vs. the ITU Digital Access Index for PingER countries monitored from the US.
Since the PingER derived throughput is inversely proportional to RTT, countries close to the US are favored.
The United Nations Development Programme (UNDP) introduced the Technology Achievement Index (TAI) to reflect a country's capacity to participate in the technological innovations of the network age. The TAI aims to capture how well a country is creating and diffusing technology and building a human skill base. It includes the following dimensions: creation of technology (e.g. patents, royalty receipts); diffusion of recent innovations (Internet hosts per capita, high & medium tech exports as a share of all exports); diffusion of old innovations (log phones per capita, log of electricity consumption per capita); and human skills (mean years of schooling, gross enrollment at the tertiary level in science, math & engineering). Fig. 17 shows December 2003's derived throughput measured from the US versus the TAI.
Figure 17: PingER derived throughputs vs. the UNDP Technology Achievement Index (TAI)
With NIIT being an important collaborator with SLAC, Caltech and CERN, we prepared a small case study with 3 PingER monitoring sites in Pakistan.
The Pakistan Education and Research Network (PERN) “is a nationwide educational intranet connecting premiere educational and research institutions of the country. PERN focuses on collaborative research, knowledge sharing, resource sharing, and distance learning by connecting people through the use of Intranet and Internet resources”.
PERN uses the services of NTC (National Telecommunication Corporation), which is the national telecommunication carrier for all official/government services in Pakistan.
Table 5: Remote sites in Pakistan

# | Remote Node | University Location | Service Provider | Traffic Enters the Country Via | End host location
1 | PK.QAU.EDU.N1 | | NTC | |
2 | PK.LSE.EDU.N1 | | NTC | |
3 | PK.NIIT.EDU.N1 | | NTC | |
4 | PK.PIEAS.EDU.N1 | | NTC | |
5 | PK.SIRSYED.SSUET.N1 | | NTC | |
6 | PK.UET.EDU.N1 | | NTC | |
7 | PK.UPESH.EDU.N1 | | NTC | |
The minimum RTT from NTC is about 5ms versus about 10-12 ms from NIIT via NTC/PERN. Presumably the extra ~5 ms is a last mile effect. From NIIT via Micronet the minimum RTT is closer to 60ms. This may be partially due to slower backbone links (it takes longer to clock the packets onto the network links) and different routes. Unfortunately we are currently unable to make traceroute measurements from the NTC host.
Looking at the average RTT results seen in Fig.18, there is a lot of variability, typically ranging from 150-400ms for the NIIT NTC/PERN host and 80-180ms for the NIIT Micronet host, and the data points for each remote host track one another closely. This indicates a common point of congestion. The NTC host results are fairly flat for each remote host, thus indicating little congestion. One can also see that the performance to the NIIT NTC/PERN connected host from NTC and from NIIT via Micronet is more variable and poorer than for the other Pakistani sites. This would appear to indicate that the congestion is located close to or at the NIIT site.
Figure 18: Average RTT from three monitoring hosts in Pakistan to the remote sites.
The loss results shown in Fig. 19 indicate that NTC has a low loss network, with the packet loss percentage being less than 1% from the NTC monitoring host to the remote sites.
Figure 19: Median packet loss from 3 monitoring hosts in Pakistan to the remote sites.
We also prepared a case study on the Internet outage in Pakistan during July 2005 [pak-fibre], which disconnected the only submarine fibre link (SEA-ME-WE 3) for the whole nation of 150 million for almost a fortnight. Officials of the Internet Service Providers Association of Pakistan (ISPAK) said "the entire country was facing an Internet blackout after a problem occurred at the end of the only Internet backbone provider, PTCL". The backup satellite links were inadequate to handle the country's Internet traffic. As a result many sites had no international Internet access at all, and the few lucky ones (priority was given to call centers) experienced high packet loss and unacceptable performance. There have been several such extended fibre outage incidents (March, June-July, September) in the last year.
It appears that NTC has an un-congested infrastructure and the minimum RTT from NTC to the PERN-connected institutes is good. Adding another hop, the slightly higher minimum RTT from NIIT is also understandable. However, the minimum RTT for the Micronet link suggests that the traffic, even if going to the same city, adds around 45-50 ms to the RTT as the service provider changes from NTC to Micronet. It also appears that the NIIT default link via NTC/PERN is heavily congested. Recent attempts to upgrade the link from 1 to 1.5 Mbits/s have met with limited success.
International connectivity for Pakistan thus remains fragile.
It is encouraging that NTC and Micronet appear to provide good backbone Internet services in Pakistan.
In Fig. 20, the main influence on the min-RTT (blue bar) should be the
physical distance between the monitor and the monitored site. Min-RTTs of over
600ms usually indicate that a geo-stationary satellite link is in use. The
shortest min RTTs (the red ellipses) are expected to be between hosts that are
in the same country (e.g. an Indian host monitoring another Indian host). The
difference in the min-RTT and avg-RTT (the red bars) is an indication of
queuing delays or congestion.
It is seen that sites with min-RTTs of over 600 ms are satellite connected, while within-country paths between monitoring and monitored sites show the shortest min-RTTs, as expected. The amount of congestion, indicated by the difference between the average and minimum RTTs, varies considerably from country to country.
Figure 20: Congestion seen from Africa and South Asia.
In August 2005, with assistance from TENET, we deployed a monitoring node in South Africa.
The study was presented by Dr. Les Cottrell at the Sharing Knowledge Across the Mediterranean Conference at ICTP Trieste, Italy in Nov 2006. First we looked at the traceroutes from the monitoring node in South Africa to the remote sites in African countries.
Figure 21: Routing to various countries in Africa.
The data is typically based on 2 or 3 nodes per country. The initial analysis shows that the routing to the various countries in Africa differs considerably.
Next we looked at the minimum RTTs to determine satellite usage. Fig. 22 on the left shows the minimum RTTs measured from South Africa.
Figure 22: On the left are shown the minimum RTTs from S. Africa to African countries, and on the right are the current fibre links for Africa.
There is currently only one intercontinental fibre link to Sub-Saharan Africa (SAT-3), which provides connections to Europe and Asia.
The East Africa Submarine Cable System (EASSy [EASSy]) Project is an initiative to connect the countries of eastern Africa via a submarine fibre optic cable.
Figure 23: Medium Term Plans for African Connectivity
Looking at the derived throughput seen in Fig. 24, there are enormous differences between and within regions, with over a factor of 30 difference between the best and worst connected countries.
Figure 24: Derived throughput from the
The longer term throughput trends for
Figure 25: Derived TCP throughput trends
from the
While analyzing the data for individual countries we observed some interesting trends as well.
We also compared the throughputs for Africa, the Middle East and Mediterranean Europe with the UNDP Human Development Index.
Figure 26: Correlation plot between the UNDP HDI and the median average monthly derived TCP throughput (from Jan. through Sep. 2005) for Africa, Mediterranean Europe and the Middle East.
The PingER method of measuring throughput breaks down for high speed networks due to the different nature of packet loss for ping compared to TCP, and also because PingER only measures about 14,400 pings of a given size per month between a given monitoring host/monitored host pair. Thus if the link has a loss rate better than 1/14,400, the loss measurements will be inaccurate. For a 100 Byte packet this is equivalent to a Bit Error Rate (BER) of about 1 in 10^8, and leading networks are typically better than this today (Jan 2006). If the measured loss is less than 1/14,400 we take the loss to be 0.5 packets in 14,400 to avoid a division by zero; so if the average RTT for ESnet is 50 ms then the maximum throughput we can use PingER data to predict is ~1460 Bytes * 8 bits / (0.050 s * sqrt(0.5/14400)), or ~40 Mbits/s, and for an RTT of 200 ms this reduces to ~10 Mbits/s.
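This ceiling on PingER-derived throughput can be reproduced directly from the numbers quoted above:

```python
from math import sqrt

PINGS_PER_MONTH = 14400
MSS_BITS = 1460 * 8

def pinger_max_throughput_mbps(rtt_s: float) -> float:
    """Maximum throughput PingER can predict: with no observed loss, the loss
    rate is floored at 0.5 packets per 14,400 to avoid division by zero."""
    loss_floor = 0.5 / PINGS_PER_MONTH
    return MSS_BITS / (rtt_s * sqrt(loss_floor)) / 1e6

print(round(pinger_max_throughput_mbps(0.050)))  # ESnet-like 50 ms RTT -> 40 Mbit/s
print(round(pinger_max_throughput_mbps(0.200)))  # 200 ms RTT -> 10 Mbit/s
```

Any path actually capable of more than this appears saturated at the ceiling in PingER's derived-throughput plots, which is why a separate method is needed for high performance links.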
To address this challenge, and to understand and provide monitoring of high performance throughput between major sites of interest to HEP and the Grid, we developed the IEPM-BW monitoring infrastructure and toolkit. There are about 5 major monitoring hosts and about 50 monitored hosts in 9 countries (CA, CH, CZ, FR, IT, JP and others).
These measurements indicate that throughputs of several hundreds of Mbits/s are regularly achievable on today's production academic and research networks, using common off the shelf hardware, standard network drivers, TCP stacks etc., standard packet sizes etc. Achieving these levels of throughput requires care in choosing the right configuration parameters. These include large TCP buffers and windows, multiple parallel streams, sufficiently powerful cpus (typically better than 1 GHz), fast enough interfaces and busses, and a fast enough link (> 100Mbits/s) to the Internet. In addition for file operations one needs well designed/configured disk and file sub-systems.
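The need for "large TCP buffers and windows" follows from the bandwidth-delay product: to keep a path full, the sender must have a full RTT's worth of data in flight. A quick sketch (the path figures are illustrative, not measurements):

```python
def required_window_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to fill the path."""
    return int(bandwidth_mbps * 1e6 / 8 * rtt_ms / 1000)

# A 1 Gbit/s path with 150 ms RTT (order of a transatlantic link) needs
# ~19 MB of window, far beyond a 64 KB default buffer.
print(required_window_bytes(1000, 150))  # 18750000 bytes
print(required_window_bytes(100, 20))    # 250000 bytes
```

This is also why multiple parallel streams help: n streams each need only 1/n of the window, and a single loss event halves only one stream's share of the aggregate rate.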
Though not strictly monitoring, there is currently much activity in understanding and improving the TCP stacks (e.g. [floyd], [low], [ravot]). In particular with high speed links (> 500Mbits/s) and long RTTs (e.g. trans-continental or trans-oceanic) today's standard TCP stacks respond poorly to congestion (back off too quickly and recover too slowly). To partially overcome this one can use multiple streams or in a few special cases large (>> 1500Bytes) packets. In addition new applications ([bbcp], [bbftp], [gridftp]) are being developed to allow use of larger windows and multiple streams as well as non TCP strategies ([tsnami], [udt]). Also there is work to understand how to improve the operating system configurations [os] to improve the throughput performance. As it becomes increasingly possible to utilize more of the available bandwidth, more attention will need to be paid to fairness and the impact on other users (see for example [coccetti] and [bullot]). Besides ensuring the fairness of TCP itself, we may need to deploy and use quality of service techniques such as QBSS [qbss] or TCP stacks that back-off prematurely hence enabling others to utilize the available bandwidth better [kuzmanovic]. These subjects will be covered in more detail in the companion ICFA-SCIC Advanced Technologies Report. We note here that monitoring infrastructures such as IEPM-BW can be effectively used to measure and compare the performance of TCP stacks, measurement tools, applications and sub-components such as disk and file systems and operating systems in a real world environment.
PingER and IEPM-BW are excellent systems for monitoring the general health and capability of the existing networks used worldwide in HEP. However, we need additional end-to-end tools to provide individuals with the capability to quantify their network connectivity along specific paths in the network, and also easier-to-use top-level navigation/drill-down tools. The former are needed both to ascertain the user's current network capability and to identify limitations which may be impeding the user's ultimate (expected) network performance. The latter are needed to simplify finding the relevant data.
Most HEP users are not "network wizards", nor do they wish to become one. In fact, as pointed out by Mathis and illustrated in Fig. 27, the gap in throughput between what a network wizard and a typical user can achieve is growing.
Figure 27: Bandwidth achievable by a network wizard and a typical user as a function of time. Also shown are some recent network throughput achievements in the HEP community.
Because of HEP's critical dependence upon networks to enable their global collaborations and grid computing environments, it is extremely important that more user specific tools be developed to support these physicists.
Efforts are underway in the HEP community, in conjunction with the Internet2 End-to-End (E2E) Performance Initiative [E2Epi], to develop and deploy a network measurement and diagnostic infrastructure which includes end hosts as test points along end-to-end paths in the network. The E2E piPEs project [PiPES], the NLANR/DAST Advisor project [Advisor] and the LISA (Localhost Information Service Agent) [LISA] are all working together to help develop an infrastructure capable of making on demand or scheduled measurements along specific network paths and storing test results and host details for future reference in a common data architecture. The information format will utilize the GGF NMWG [NMWG] schema to provide portability for the results. This information could be immediately used to identify common problems and provide solutions as well as to acquire a body of results useful for baselining various combinations of hardware, firmware and software to define expectations for end users.
A primary goal is to provide as "lightweight" a client component as possible to enable widespread deployment of such a system. The LISA Java Web Start client is one example of such a client, and another is the Network Diagnostic Tester (NDT) tool [NDT]. By using Java and Java Web Start, the most current testing client can be provided to end users as easily as opening a web page. The current version supports both Linux and Windows clients.
Details of how the data is collected, stored, discovered and queried are being worked out. A demonstration of a preliminary system was shown at the Internet2 Joint Techs meeting.
The goal of easier to use top level drill down navigation to the measurement data is being tackled by MonALISA [MonALISA] in collaboration with the IEPM project.
During the last year there has been a concerted effort to deploy and monitor the central data distribution network for the Large Hadron Collider (LHC). This network, dubbed the LHC-OPN (Optical Private Network), is being created primarily to support the data distribution from the CERN Tier-0 to the various Tier-1s worldwide. In addition, traffic between Tier-1 sites is also allowed to traverse the OPN.
Given the central role this network will play in the distribution of data, it is critical that the network and its performance be well monitored. A working group was convened in the fall of 2005 to study what type of monitoring might be appropriate for this network. A number of possible solutions were examined, including MonALISA, IEPM-BW/PingER, various EGEE working group efforts and perfSONAR.
By the spring of 2006 there was a consensus that LHC-OPN monitoring should build upon the perfSONAR effort, which was already being deployed in some of the most important research networks. perfSONAR is a standardized framework for capturing and sharing monitoring information; other monitoring systems can be plugged into it with some interface "glue".
There is also a significant amount of ongoing research around managed networks for HEP. There are efforts funded by the National Science Foundation (UltraLight) and the Department of Energy (Terapaths, LambdaStation and OSCARS) which are strongly based in HEP. These projects are not primarily focused upon monitoring, but all have aspects that provide network information applications. Some of the existing monitoring systems discussed in previous sections either came out of these efforts or are being further developed by them.
Recent studies of HEP needs, for example the TAN Report (http://gate.hep.anl.gov/lprice/TAN/Report/TAN-report-final.doc), have focused on communications between developed regions such as Europe and North America.
The PingER throughput predictions based on the Mathis formula assume that throughput is mainly limited by packet loss. The 40% per year growth curve in Fig. 10 is somewhat lower than the 79% per year growth in future needs that can be inferred from the tables in the TAN Report. True throughput measurements have not been in place for long enough to measure a growth trend. Nevertheless, the throughput measurements, and the trends in predicted throughput, indicate that current attention to HEP needs between developed regions could result in needs being met. In contrast, the measurements indicate that the throughput to less developed regions is likely to continue to be well below that needed for full participation in future experiments.
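The Mathis formula underlying these predictions bounds TCP throughput by MSS/(RTT·√loss), up to a constant of order 1 [mathis]. A minimal sketch, with illustrative (not measured) parameter values:

```python
import math

def mathis_throughput_bps(mss_bytes=1460.0, rtt_s=0.2, loss=0.001):
    """Approximate achievable TCP throughput in bits/s from the Mathis et al.
    formula: rate ~ (MSS / RTT) * (C / sqrt(loss)), taking the constant C = 1."""
    return 8.0 * mss_bytes / (rtt_s * math.sqrt(loss))

# e.g. a 200 ms trans-oceanic path with 0.1% loss supports only ~1.8 Mbits/s
# for a single standard TCP stream with 1460-byte segments.
print(mathis_throughput_bps(1460.0, 0.2, 0.001))
```

This makes explicit why the PingER predictions treat packet loss as the limiting factor: for fixed MSS and RTT, halving the loss rate improves predicted throughput by only √2, so sustained yearly improvements in loss translate into the modest throughput growth curve of Fig. 10.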
We have extended the measurements to cover more developing countries and to increase the number of hosts monitored in each developing country. We have carefully evaluated the routes and minimum ping RTTs to verify that hosts are where they are identified to be in our database. As a result we have worked with contacts in the relevant countries and sites to find alternatives, and about 20-30 hosts have been replaced by more appropriate hosts. In addition (see Table 6) we have added over 120 new remote hosts and 10 new countries (AO, CY, LB, NL, PS, SI, ES, SY, VN, and ZM). At the same time, in the last year we were no longer able to find hosts to monitor in 3 countries (BG, CO, SB); in the previous year we lost 10 countries (AU, BE, KY, MO, RE, SA, SC, SL, ES and VN), but recovered one (VN) last year. The unreachability of these sites has mainly been caused by ping blocking.
The collaboration between SLAC and the NIIT in Pakistan has been critical to keeping the project running.
We still spend much time working with contacts to unblock pings to their sites (for example, ~15% of hosts pingable in July 2003 were no longer pingable in December 2003), to identify the locations of hosts, and to find new hosts/sites to monitor. It is unclear how cost-effective this activity is. It can take many emails to explain the situation, sometimes requiring restarting when the problem is passed to a more technically knowledgeable person. Even then there are several unsuccessful cases where, even after many months of emails and the best of intentions, the pings remain blocked. One specific case involved all the university sites in one country.
Even finding knowledgeable contacts, explaining what is needed and following up to see if the recommended hosts are pingable is quite labor intensive. To assist with this we have created a brochure for PingER describing its purposes, goals and requirements. More recently we have had some success by using Google to search for university web sites in specific TLDs, and this year we have automated this. The downside is that this way we have no contacts with specific people with whom we can deal in case of problems.
With the increase in PingER monitoring data, we initiated efforts to develop supporting systems for better management and installation of PingER. Several major initiatives have been taken, which are summarized below:
With the growth in the coverage of PingER comes the difficulty of keeping track of changes in the physical locations of the monitored sites. This can lead to misleading conclusions, for instance when a monitored host is relocated or is actually proxied elsewhere.
The location of an IP address is determined using the minimum RTTs measured from multiple "landmark" sites at known locations, and triangulating the results to obtain an approximate location. The basic application, a prototype of which is deployed at http://www.slac.stanford.edu/comp/net/wan-mon/tulip/, is a Java-based JNLP application that takes RTT measurements from landmarks to a user-specified target host (typically at an unknown location) and estimates the latitude and longitude of the target host. The application is under development, and its algorithm and the provision of landmark sites are under constant improvement to make the process reasonably accurate.
TULIP (IP Locator Using Triangulation) will also utilize the historical min-RTT PingER data in order to verify the locations of hosts/sites recorded in the PingER configuration database, and to optimize the choices of parameters used by TULIP.
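The idea can be sketched as follows (in Python rather than TULIP's Java). The landmark coordinates, RTT values, signal-speed constant and the crude weighted-centroid combination are illustrative assumptions, not TULIP's actual algorithm:

```python
# Sketch of RTT-based geolocation: each landmark's minimum RTT bounds its
# distance to the target (signals travel at roughly 2/3 c in fibre, i.e.
# ~100 km per ms of one-way delay), and combining bounds from several
# landmarks brackets the target's position. All numbers here are invented.
KM_PER_MS_ONE_WAY = 100.0  # assumed propagation speed, ~2/3 c in fibre

def max_distance_km(min_rtt_ms):
    """Upper bound on landmark-target distance implied by a minimum RTT."""
    return (min_rtt_ms / 2.0) * KM_PER_MS_ONE_WAY

def estimate_position(landmarks):
    """Crude triangulation: weight each landmark by the inverse of its
    implied distance bound and take the weighted centroid. This flat-earth
    approximation is adequate only as an illustration of the principle."""
    wsum = latsum = lonsum = 0.0
    for lat, lon, min_rtt_ms in landmarks:
        w = 1.0 / max(max_distance_km(min_rtt_ms), 1.0)
        wsum += w
        latsum += w * lat
        lonsum += w * lon
    return latsum / wsum, lonsum / wsum

# Invented landmarks: (latitude, longitude, min RTT in ms) to the target.
landmarks = [(40.0, -100.0, 10.0), (35.0, -90.0, 20.0), (45.0, -95.0, 30.0)]
lat, lon = estimate_position(landmarks)
```

A production locator would use great-circle geometry and intersect the distance bounds rather than average them, which is part of what the constant algorithmic improvement mentioned above addresses.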
Increasing the coverage of PingER in developing countries has historically been difficult because it is hard to find hosts which are geographically located within those countries and do not block pings. The PingER host-searching tool is an attempt to solve this problem. It completely automates the process of searching for hosts to monitor within a country by doing the following:
1. It downloads the search results for the required country from Google using its TLD (Top Level Domain).
2. Using regular expressions and pattern matching it searches for hostnames in the results.
3. After compiling a list of unique hostnames, it pings each hostname in turn and filters out those which block pings.
4. Finally, it checks the hosts in the filtered list with GeoIpTool (http://www.geoiptool.com), filtering out hosts which are not geographically located in the target country. We found that GeoIpTool geo-locates hosts with a high degree of accuracy.
5. After going through the above steps, the tool reports the final filtered list of hosts which are candidates for monitoring in the required country.
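Steps 2-3 above can be sketched as follows; the regular expression and the HTML snippet are invented for illustration (the real tool's patterns are not reproduced here), and the ping and GeoIpTool stages are omitted since they require network access:

```python
import re

# Sketch of hostname extraction from fetched search-result HTML: find
# candidate hostnames, keep only those under the country's TLD, and
# de-duplicate. The pattern below is an illustrative assumption.
HOST_RE = re.compile(r'https?://([a-z0-9.-]+\.[a-z]{2,})', re.I)

def candidate_hosts(html, tld):
    """Unique hostnames found in `html` ending with the given TLD (e.g. '.zm')."""
    hosts = set()
    for name in HOST_RE.findall(html):
        if name.lower().endswith(tld):
            hosts.add(name.lower())
    return sorted(hosts)

# Invented example: only the .zm host survives the TLD filter.
html = '<a href="http://www.unza.zm/dept">..</a> <a href="http://example.com">..</a>'
print(candidate_hosts(html, '.zm'))  # ['www.unza.zm']
```

Each surviving hostname would then be pinged (step 3) and geolocated (step 4) before being reported as a monitoring candidate.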
It was very helpful in improving the coverage of PingER.
Since its inception, the PingER project has grown to where it now monitors hosts in over 120 countries from about 35 monitoring hosts in 14 countries. With growth in the number of monitoring as well as monitored (remote) nodes, automated management mechanisms became necessary. We therefore developed a tool that runs daily and reports on the following:
· Database errors, such as: invalid or missing IP addresses; hosts without an associated region; hosts appearing more than once in the database; hosts missing a latitude and longitude; monitoring-host names that do not match the labeling of the data gathered from the monitoring host; and hosts without a unique IP address.
· The lists of beacons and of sites to be monitored are now generated from the database.
· We ping all hosts. Those not responding are checked to see whether they still exist (i.e. whether they respond to a name-service request) and whether they respond to any of the common TCP ports; if they do, they are marked as blocking pings. If a host does not respond to pings of its IP address, we try its name in case the IP address has changed.
· We track how long a host pair (monitor host/remote host) has not successfully pinged and whether the remote host is blocked.
· We keep track of how long we have been unable to gather data from each monitoring host.
· We also compare the minimum RTTs for sites within a region with one another and look to see whether any lie more than 3-4 standard deviations away. This is designed to help find hosts that are not really located in a region (e.g. are proxied elsewhere).
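A couple of the daily checks above can be sketched as follows; the host-record fields, the invented sample data and the thresholds are illustrative assumptions, not PingER's actual schema or code:

```python
import statistics

# Sketch of two daily database checks: missing coordinates / duplicate IP
# addresses, and regional min-RTT outliers suggesting a mislocated host.
def database_errors(hosts):
    """Report hosts with missing latitude/longitude or duplicate IPs.
    `hosts` is a list of dicts with 'name', 'ip', 'lat', 'lon' keys (assumed)."""
    errors, seen_ips = [], {}
    for h in hosts:
        if h.get('lat') is None or h.get('lon') is None:
            errors.append(f"{h['name']}: missing latitude/longitude")
        ip = h.get('ip')
        if ip in seen_ips:
            errors.append(f"{h['name']}: duplicate IP {ip} (also {seen_ips[ip]})")
        elif ip:
            seen_ips[ip] = h['name']
    return errors

def rtt_outliers(region_rtts, n_sigma=3.0):
    """Hosts whose min RTT lies more than n_sigma standard deviations from
    the regional mean, hinting they are not really located in the region."""
    values = list(region_rtts.values())
    if len(values) < 3:
        return []
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [h for h, v in region_rtts.items() if sd and abs(v - mean) > n_sigma * sd]
```

With small regions a single mislocated host inflates the standard deviation, so in practice a robust statistic (or a leave-one-out comparison) would be preferable to the naive mean/sigma test shown here.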
As PingER grew in coverage and scope, a need was felt for software providing a coherent way of quickly analyzing trends in the data. PingER Executive Plots is a set of tools aimed at seamlessly integrating a high-level graphical analysis front-end with the raw data being generated. It was developed in Java as a JNLP application and can be run anywhere in the world over the web.
The two main analysis capabilities provided by this suite are:
1. High-level trend analysis of data for metrics such as TCP throughput, packet loss and min/avg RTT, for various regions from all PingER monitoring sites.
2. Cumulative quality graphs showing the breakdown of sites with a particular loss quality within a region.
These analysis capabilities are supplemented by a wide variety of features that make it a useful tool, including:
1. It allows the user to dynamically download the current data available in the SLAC archives, over the web, for graphing. This means that as soon as the latest data is updated at the SLAC archives, it is available for analysis anywhere.
2. Other features include:
a. Ability to select logarithmic or exponential views of the trendlines.
b. Zoom in/out on a particular time-interval.
c. Adding/removing the lines/points corresponding to specific countries/regions.
d. Dynamic tooltips on the entire chart area displaying information about specific datapoints.
e. Saving a snapshot of the graphs as image files.
f. Viewing and storing the goodness of fit parameters for trendlines in a file.
3. The cumulative quality graphs application enables the user to form their own query consisting of a specific monitoring site, remote region and metric for graphing.
Fig. 28 shows a screen snapshot of exponential trendlines from SLAC to various regions (labeled continent) of the world together with the data points for two of the lines.
Figure 28: Snapshot of PingER Executive Plots in action
With the wide deployment of PingER, ViPER provides an extremely valuable, eye-catching overview of PingER's deployment and the performance between various regions of the world. Since PingER is focusing on mapping the Digital Divide, it desperately needs simple-to-use, graphically engaging tools such as ViPER to grab the attention of politicians, executives and upper management at funding agencies and NGOs. ViPER is also valuable to the management of PingER since it quickly identifies PingER database errors (e.g. hosts with incorrect latitude/longitude), enabling better quality control.
ViPER has a user friendly interface that allows a user to perform interactive analysis on PingER data. It uses a world map to show the geographic location of all the PingER nodes. When the user selects a particular node, detailed information about that node is shown. Also the links are colored according to their performance so the user can compare the performance of various links.
The user can plot graphs of various metrics, such as TCP throughput, minimum RTT and packet loss, between a monitoring-site/remote-site pair. The graph shows the historical performance of the link. The user can save the analysis information for future reference. A screen shot of ViPER is shown in Fig. 29.
Figure 29: Screen shot of ViPER showing the losses from CDAC India in Mumbai to CERN in Geneva
At the bottom of the screen shot are the user's selections of monitoring site, remote site and metric; on the map, monitoring sites are shown in red and remote sites in green. A colored line from the chosen monitoring site (CDAC in Mumbai) to the remote host (at CERN in Geneva) indicates the quality of the end-to-end path in terms of the chosen metric, here loss (the color legend is given at the top right). Moving the mouse over a site pops up the name of the site, or sites if there are multiple sites in the area. Clicking on a site gives detailed configuration information about the site(s). In the case of Fig. 29 the user has also requested a plot of the losses for the last 60 days, which is shown in the overlaid window.
ViPER is a Java application that has been deployed via JNLP (the Java Network Launching Protocol). Thus the application is launched through a simple mouse click and users do not need to install it. The application is deployed at http://www.slac.stanford.edu/comp/net/wan-mon/viper/
· Bridging the Digital Divide, R. Les Cottrell and Harvey Newman, American Institute of Physics Forum on International Physics Newsletter.
· Quantifying the Digital Divide: A Scientific Overview of the Connectivity of South Asian and African Countries, A. Rehmatullah, R. L. Cottrell, J. Williams, A. Ali, CHEP06.
· January 2006 Report of the ICFA-SCIC Monitoring Working Group, edited by Les Cottrell for the ICFA SCIC Monitoring Working Group.
· Sub-Saharan Africa is a dark zone for World Internet: Sounding an Alarm, prepared by Les Cottrell, presented by Warren Matthews at the Internet2 Fall 2006 Members Meeting, Chicago Dec 2006
· Sub-Saharan Africa is a dark zone for World Internet: Sounding an Alarm, presented by Les Cottrell at the Sharing Knowledge Across the Mediterranean conference at ICTP Trieste Nov 6-8, 2006.
· Quantifying the Digital Divide from an Internet Point of View, R. Les Cottrell, Aziz Rehmatullah, Jerrod Williams and Akbar Mehdi, presented at the Reuters Digital Vision Program, Stanford, October 17, 2006.
· Internet Monitoring and Results for the Digital Divide, R. Les Cottrell, Aziz Rehmatullah, Jerrod Williams and Akbar Mehdi, presented at the "International ICFA Workshop on Grid Activities within Large Scale International Collaborations", Sinaia, Romania, October 13-18, 2006.
· Navigating PingER, R. Les Cottrell, presented at ICTP "Optimization Technologies for Low-Bandwidth Networks" workshop
· Quantifying the Digital Divide from an Internet Point of View, R. Les Cottrell, Aziz Rehmatullah, Jerrod Williams and Akbar Mehdi, presented at ICTP "Optimization Technologies for Low-Bandwidth Networks" workshop.
· PingER: An Effort to Quantify the Digital Divide, presented by Aziz Rehmatullah, May 2006.
· Stanford University, SLAC, NIIT, the Digital Divide and Bandwidth Challenge, presented by Les Cottrell to the NUST/NIIT Faculty.
· Quantifying the Digital Divide: A scientific overview of the connectivity of South Asian and African Countries, presented by Les Cottrell at CHEP06.
There is interest from ICFA, ICTP and others to extend the monitoring further to countries with no formal HEP programs, but where there are needs to understand Internet connectivity performance in order to aid the development of science.
We should ensure there are >= 2 remote sites monitored in each Developing Country. All results should continue to be made available publicly via the web, and publicized to the HEP community and others. Typically HEP leads other sciences in its needs and in developing understanding and solutions, so outreach from HEP to other sciences is to be encouraged. The results should continue to be publicized widely.
We need assistance from ICFA and others to find sites to monitor and contacts in the following countries:
· Latin America:
· Belarus (need > 1)
· Africa: Burundi, Cameroon, Mali, Mauritania, Niger, Zimbabwe (all need > 1 site/country), Democratic Republic of the Congo, Libya, Somalia, (have none)
· Mid East and Central Asia:
Although not a recommendation per se, it would be disingenuous to finish without noting the following. SLAC and FNAL are the leaders in the PingER project. The PingER effort was funded by the DoE MICS office from 1997, but that funding terminated at the end of September 2003, since it was being funded as research and the development is no longer regarded as a research project. To continue the effort at a minimum level (maintain data collection, explain needs, reopen connections, open firewall blocks, find replacement hosts, make limited special analyses and case studies, prepare and make presentations, respond to questions) would probably require central funding at a level of about 50% of a Full Time Equivalent (FTE) person, plus travel. Extending and enhancing the project (fixing known non-critical bugs, improving visualization, automating reports generated by hand today, finding new country site contacts, adding route histories and visualization, automating alarms, updating the web site for better navigation, adding more Developing Country monitoring sites/countries, and improving code portability) is, interestingly, currently being addressed by the MAGGIE-NS project with NIIT in Pakistan, funded for one year by the US Department of State and the Pakistani Ministry of Science and Technology (MOST). However, this funding has terminated, and NIIT and SLAC are supporting PingER without funding. Without funding for the operational side, the future of PingER, and of reports such as this one, is unclear, and the level of effort sustained in 2003 through 2006 will not be possible in 2007. Many agencies/organizations (e.g. DoE, ESnet, NSF, ICFA, ICTP, IDRC, UNESCO) have expressed interest in this work, but none can (or are allowed to) fund it. A recent proposal to the U.S. Department of State was not funded. We will submit a new proposal later this year.
Table 6 lists the 124 countries currently (January 1st, 2007) in the PingER database. The number in the column to the right of the country name is the number of hosts monitored in that country. The number cell is colored magenta if we have monitored no hosts in that country for the last 2 years, orange where we used to monitor a host in the country last year but this year no longer monitor any hosts, and yellow where we currently monitor only one host. Countries marked in green have been added in the last year. The smallest country we monitor in terms of population is French Polynesia.
Table 6: PingER countries monitored from SLAC with the number of sites/country. Countries in green were added in 2006. We are unable to find any monitorable sites any longer in countries with 0 sites.
We gratefully acknowledge the following: the assistance from NUST/NIIT in improving the PingER toolkit and management has been critical to keeping the project running; in this respect we particularly acknowledge the support of their leader Arshad Ali. Akbar Khan of NIIT helped in updating some of the graphs and the case study on Africa. Shawn McKee of the University of Michigan kindly provided the sections on LHC-OPN Monitoring and Related Network Research. Mike Jensen provided much useful information on the status of networking in Africa; Alberto Santoro of UERJ provided very useful information on Latin America; Sergio Novaes of UNESP and Julio Ibarra of Florida International University provided useful contacts in Latin America. We received much encouragement from Marco Zennaro and Enrique Canessa of ICTP, and from the ICFA/SCIC, in particular from Harvey Newman, the chairman. We must also not forget the help and support from the administrators of the PingER monitoring sites worldwide.
[Advisor] http://dast.nlanr.net/Projects/Advisor/
[africa-rtm] Enrique Canessa, "Real time network monitoring in Africa".
[bbcp] Andrew Hanushevsky, Artem Trunov, and Les Cottrell, "P2P Data Copy Program bbcp", CHEP01, Beijing, 2001. Available at http://www.slac.stanford.edu/~abh/CHEP2001/p2p_bbcp.htm
[bbftp] "Bbftp". Available http://doc.in2p3.fr/bbftp/.
[bcr] "Application Demands Outrun Internet Improvements", P. Sevcik, Business Communications Review, January 2006.
[bullot] "TCP Stacks Testbed", Hadrien Bullot and R. Les Cottrell. Available at http://www-iepm.slac.stanford.edu/bw/tcp-eval/
[cia-pop-figures] Available at: http://www.cia.gov/cia/publications/factbook/fields/2119.html
[coccetti] "TCP Stacks on Production Links", Fabrizzio Coccetti and R. Les Cottrell. Available at http://www-iepm.slac.stanford.edu/monitoring/bulk/tcpstacks/
[E2Epi] http://e2epi.internet2.edu/
[eassy] The East African Submarine System http://eassy.org/
[ejds-email] Hilda Cerdeira and the eJDS Team, ICTP/TWAS Donation Programme, "Internet Monitoring of Universities and
[ejds-africa] "Internet Performance to Africa", R. Les Cottrell and Enrique Canessa, Developing Countries Access to Scientific Knowledge: Quantifying the Digital Divide, ICTP Trieste, October 2003; also SLAC-PUB-10188. Available http://www.ejds.org/meeting2003/ictp/papers/Cottrell-Canessa.pdf
[ejds-pinger] "PingER History and Methodology", R. Les Cottrell, Connie Logg and Jerrod Williams. Developing Countries Access to Scientific Knowledge: Quantifying the Digital Divide, ICTP Trieste, October 2003; also SLAC-PUB-10187. Available http://www.ejds.org/meeting2003/ictp/papers/Cottrell-Logg.pdf
[EMA] http://monalisa.cern.ch/EMA/
[floyd] S. Floyd, "HighSpeed TCP for Large Congestion Windows", Internet draft draft-floyd-tcp-highspeed-01.txt, work in progress, 2002. Available http://www.icir.org/floyd/hstcp.html
[gridftp] "The GridFTP Protocol and Software". Available http://www.globus.org/datagrid/gridftp.html
[host-req] "Requirements for WAN Hosts being Monitored", Les Cottrell and Tom Glanzman. Available at http://www.slac.stanford.edu/comp/net/wan-req.html
[icfa-98] "May 1998 Report of the ICFA NTF Monitoring Working Group". Available http://www.slac.stanford.edu/xorg/icfa/ntf/
[icfa-mar02] "ICFA/SCIC meeting at CERN in March 2002". Available http://www.slac.stanford.edu/grp/scs/trip/cottrell-icfa-mar02.html
[icfa-jan03] "January 2003 Report of the ICFA-SCIC Monitoring Working Group". Available http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-dec02/
[icfa-jan04] "January 2004 Report of the ICFA-SCIC Monitoring Working Group". Available http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan04/
[icfa-jan05] "January 2005 Report of the ICFA-SCIC Monitoring Working Group". Available http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan05/
[icfa-jan06] "January 2006 Report of the ICFA-SCIC Monitoring Working Group". Available http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan06/
[iepm] "Internet End-to-end Performance Monitoring - Bandwidth to the World Project". Available http://www-iepm.slac.stanford.edu/bw
[ictp] Developing Country Access to On-Line Scientific Publishing: Sustainable Alternatives, Round Table meeting held at ICTP.
[ictp-jensen] Mike Jensen, "Connectivity Mapping in Africa", presentation at the ICTP Round Table on Developing Country Access to On-Line Scientific Publishing: Sustainable Alternatives at ICTP.
[ictp-rec] Recommendations of the Round Table held in Trieste to help bridge the digital divide. Available http://www.ictp.trieste.it/ejournals/meeting2002/Recommen_Trieste.pdf
[kuzmanovic] "HSTCP-LP: A Protocol for Low-Priority Bulk Data Transfer in High-Speed High-RTT Networks", Alexander Kuzmanovic, Edward Knightly and R. Les Cottrell. Available at http://dsd.lbl.gov/DIDC/PFLDnet2004/papers/Kuzmanovic.pdf
[LISA] http://monalisa.cern.ch/monalisa__Interactive_Clients__LISA.html
[low] S. Low, "Duality model of TCP/AQM + Stabilized Vegas". Available http://netlab.caltech.edu/FAST/meetings/2002july/fast020702.ppt
[mathis] M. Mathis, J. Semke, J. Mahdavi, T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Computer Communication Review, volume 27, number 3, pp. 67-82, July 1997.
[MonALISA] http://monalisa.cacr.caltech.edu/
[NDT] http://miranda.ctd.anl.gov:7123/
[NMWG] http://www-didc.lbl.gov/NMWG/
[nua] NUA Internet Surveys, "How many Online". Available http://www.nua.ie/surveys/how_many_online/
[os] "TCP Tuning Guide". Available http://www-didc.lbl.gov/TCP-tuning/
[pak-develop-news] News article entitled "PM launches Seamewe-4 submarine cable". Available at http://www.jang.com.pk/thenews/jan2006-daily/03-01-2006/main/main5.htm
[pak-fibre] "Fiber Outage in Pakistan".
[pinger] "PingER". Available http://www-iepm.slac.stanford.edu/pinger/; W. Matthews and R. L. Cottrell, "The PingER Project: Active Internet Performance Monitoring for the HEP Community", IEEE Communications Magazine, Vol. 38 No. 5, pp. 130-136, May 2000.
[perfSONAR] PERFormance Service Oriented Network monitoring Architecture , see http://www.perfsonar.net/
[Pernprop] PC-1 Documents: 1) "Last Mile Pakistan Education and Research Network Connectivity Model PC-1" and 2) "Conversion of last mile Pakistan Education and Research Network Connectivity to Fiber Optics Model PC-1". Available at: http://www.pern.edu.pk/PC-1doc/pc-1doc.html
[pinger-deploy] "PingER Deployment". Available http://www.slac.stanford.edu/comp/net/wan-mon/deploy.html
[PiPES] http://e2epi.internet2.edu/
[qbss] "SLAC QBSS Measurements". Available http://www-iepm.slac.stanford.edu/monitoring/qbss/measure.html
[ravot] J. P. Martin-Flatin and S. Ravot, "TCP Congestion Control in Fast Long-Distance Networks", Technical Report CALT-68-2398, California Institute of Technology, July 2002. Available http://netlab.caltech.edu/FAST/publications/caltech-tr-68-2398.pdf
[tsunami] "Tsunami". Available http://ncne.nlanr.net/training/techs/2002/0728/presentations/pptfiles/200207-wallace1.ppt
[tutorial] R. L. Cottrell, "Tutorial on Internet Monitoring & PingER at SLAC". Available http://www.slac.stanford.edu/comp/net/wan-mon/tutorial.html
[udt] Y. Gu, R. L. Grossman, "UDT: An Application Level Transport Protocol for Grid Computing", submitted to the Second International Workshop on Protocols for Fast Long-Distance Networks.
[un] "United Nations Population Division World Population Prospects Population database". Available http://esa.un.org/unpp/definition.html
h. These countries appear in the Particle Data Group diary and so would appear to have HEP programs.
[1] Since North America officially includes
[2] In special cases, there is an option to reduce the network impact to ~10 bits/s per monitor-remote host pair.
[3] Reducing costs is critical when one considers that one year of Internet access costs more than the average annual income of most Africans (survey by Paul Budde Communications).
[4] Typically satellite links are 300-1000 times more expensive in $/Mbits/s than fibre links.