Comparison of some Internet Active
End-to-end Performance Measurement projects
Author:
Les Cottrell, with help from
Matt Zekauskas (Surveyor), Henk Uijterwaal (RIPE), and Tony McGregor (AMP).
Created: July 9, 1999; last updated July 13, 1999
Introduction
There are several projects making Active (i.e. probe-injecting)
Internet End-to-end Performance Measurements (AIEPM). This document
compares projects that are in the public domain and that make their reports
available via the web.
In addition to the projects reported in this document, there are
commercial enterprises that make selected portions of their information
available publicly. These include AndoverNews Network's Internet Traffic Report,
Inverse Network Technology, Keynote, and Matrix Information & Directory Services.
Other enterprises make measurements for a particular
community but do not provide public access to them; examples are the
Automobile Network eXchange (ANX) and the Cross Industry
Working Team (XIWT). For more on network monitoring projects
(including passive monitoring) see
Interesting web sites for Internet Monitoring.
Project Comparisons
There are five major public domain AIEPM projects that we will consider:
- AMP:
the National Laboratory for Applied Network Research (NLANR)
Active Measurement Program (AMP) for High Performance Computing (HPC)
awardees is intended to improve the understanding of how high-performance
networks perform as seen by participating sites and users, and to help in
problem diagnosis for both the network's users and its providers.
The community of interest is the National Science Foundation (NSF)
HPC program awardees.
- NIMI:
a project funded by the NSF to measure the global Internet.
NIMI is scalable in that NIMI probes can be delegated to administration
managers for configuration information and measurement coordination.
It is dynamic in that the measurement tools are external to
nimid, added as third-party packages as needed.
- PingER:
the DOE / MICS Internet End-to-end Performance Measurement (IEPM) project
to provide active monitoring of the end-to-end performance of
Internet links. The community of interest is ESnet,
High Energy & Nuclear Physics (HENP) and the
Cross Industry Working Team (XIWT).
- RIPE:
the project's goal is to make independent measurements of connectivity
parameters, such as delays and routing vectors, in the Internet. Its
community of interest is European Internet Service Providers (ISPs)
and their users.
- Skitter:
is primarily intended to measure forward IP paths (each "hop")
from a source to many destinations. It is supported by the
Cooperative Association for Internet Data Analysis (CAIDA).
The community of interest, and much of the funding, comes from
the Defense Advanced Research Projects Agency (DARPA) and the NSF. See the
PingER FAQ for more of a comparison of PingER and Skitter.
- Skping,
on the other hand, appears to be used as a high-resolution tool (i.e.
frequent measurements between two hosts) to understand and troubleshoot
end-to-end performance. I do not see any long-term archive of measurements
that can be accessed for further analysis.
Skping does have some very nice canned visualization tools and has paid
a lot of attention to timing. This means the measurement engines
are probably uniform (or at least all running the same OS with the
appropriate modifications) and possibly centrally managed.
- Surveyor:
uses active tests of one-way delay and packet loss along paths
between measurement machines at
CSG sites, and
some associated sites. The community of interest is the
research and education missions of US higher education.
Though NIMI does make measurements, its main goal is to provide
an infrastructure for making measurements; at the moment no results
are available publicly, so it will not be pursued
further in this document.
The remainder can be divided up by whether they make one way measurements
or round trip (two-way) measurements.
Surveyor and RIPE make one-way delay measurements and require a Global
Positioning System (GPS) to provide clock synchronization between sites.
AMP & PingER make two-way measurements using
the
Internet Control Message
Protocol (ICMP) ping facility today,
and do not require a GPS. Skitter makes round-trip measurements, but is more
macroscopic (global) in purpose, measuring 35,000 sites hourly (or a subset thereof).
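As a rough illustration of the two-way method, the sketch below times a round trip. The projects themselves use ICMP echo (ping), which needs raw sockets and root privileges, so this hypothetical version bounces a UDP packet off a local echo server instead; only the timing idea carries over.

```python
# Illustrative two-way (round-trip) delay measurement in the spirit of
# AMP/PingER. A local UDP echo server stands in for the ICMP echo
# responder at the remote site (assumption: real projects use ping).
import socket
import threading
import time

def run_echo_server(sock):
    """Echo one received datagram back to its sender (the 'remote' end)."""
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)

def measure_rtt(probe_size=100):
    # "Remote" end: a UDP echo server on the loopback interface.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

    # Monitoring end: send a timestamped probe and await the echo.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(2.0)
    payload = b"x" * probe_size          # e.g. PingER uses 100-byte pings
    t_send = time.monotonic()
    client.sendto(payload, server.getsockname())
    data, _ = client.recvfrom(2048)      # loss would show as a timeout here
    t_recv = time.monotonic()
    client.close()
    server.close()
    return (t_recv - t_send) if data == payload else None

if __name__ == "__main__":
    rtt = measure_rtt()
    print(f"round-trip delay: {rtt * 1e3:.3f} ms")
```

Because the probe and its echo are timed by the same clock, no GPS synchronization is needed; one-way measurements (Surveyor, RIPE) cannot make that simplification.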
All except PingER require a dedicated PC running Unix to be placed at each
monitoring site. PingER makes use of carefully selected existing hosts,
and software only needs to be installed at the monitor hosts.
Both AMP and PingER can monitor
remote sites without any prerequisite to install hardware or software at
the remote site.
In all cases the monitors send packets at intervals
to remote hosts and use these packets to gather
delay and loss measurements. The monitor sites do
full-mesh pinging (i.e. each monitor site monitors every other monitor site).
Skitter and PingER also monitor many non-monitor sites.
Most of the monitors also make concurrent
traceroute measurements, which provide route history information
(Skitter, in particular, is very oriented to making route measurements).
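The full-mesh arrangement can be sketched as follows; the monitor names are hypothetical. For N monitors it yields N*(N-1) ordered source-to-destination pairs, which is why the pair counts in the table below grow much faster than the monitor counts.

```python
# Full-mesh monitoring schedule: every monitor probes every other monitor.
# Monitor names are made up for illustration.
from itertools import permutations

monitors = ["slac", "cern", "nlanr", "ripe-ncc"]

# Ordered (source, destination) pairs; direction matters because the
# forward and reverse paths through the Internet can differ.
mesh = list(permutations(monitors, 2))
print(len(mesh))  # 4 monitors -> 4 * 3 = 12 ordered pairs
```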
Surveyor, AMP and PingER provide web access to enable the public to
select and search through the information available. RIPE provides access
to the information only to monitor sites. As far as I can see, Skitter only
provides examples of its reports.
Metric | Surveyor | RIPE | PingER | AMP | Skitter
Method | 1-way delay & loss | 1-way delay & loss | 2-way ping | 2-way ping | traceroute-like
Hosts | Dedicated | Dedicated | "selected" | Dedicated | Dedicated
Time synchronization | GPS | GPS | NTP | NTP | NTP
Frequency (load average) | ~2*2/s (~2 kbps) | ~3/min (0.330 kbps) | ~0.01/s (~0.1 kbps) | ~1/minute | Hourly
Scheduling | Poisson <2/s> | Poisson <1/min> | Bursty (30 min) | Linear random about 1st 15 seconds of min. | ~30 min.
Packet size | ~40 Bytes | 100 Bytes | 100 Bytes & 1000 Bytes | 64 Bytes | 52 Bytes
Locations | US, CA, CH, NL & NZ | EU, IL, US | 10 monitoring-site countries, 22 remote-site countries | US, NZ, NO | Monitors in Asia, CA, UK, US
Monitors | ~51 (Jul-99) | ~32 (Jul-99) | 18 (Jul-99) | ~70 (Jul-99) | 20
Pairs | ~1000 | 1024 | ~1200 | ~4600 | 35000
Data start | 1997 | 1998 | 1995 | 1999 | 1998
Data availability | Upon request | Upon request | Public access via web | Public access via web | ?
Data storage | ~38 MB/pair/mo | 2 MB/pair/mo | ~0.6 MB/pair/mo | ~1.3 MB/pair/mo (0.5 MB zipped) | ?
Sponsors/Community | CSG / Advanced | RIPE / European R&E sites | DOE / ESnet / HENP / XIWT | NSF / NLANR / Internet 2 | DARPA / NSF / CAIDA
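The Poisson scheduling used by Surveyor and RIPE can be sketched as follows: inter-probe gaps are drawn from an exponential distribution, so send times form a Poisson process and avoid synchronizing with periodic behavior in the network. The rate and duration below are illustrative, not any project's exact configuration.

```python
# Sketch of Poisson probe scheduling (Surveyor ~2/s, RIPE ~1/min):
# exponentially distributed gaps between probes give a Poisson process.
import random

def poisson_send_times(rate_per_sec, duration_sec, seed=1):
    """Return probe send times (seconds) within [0, duration_sec)."""
    rng = random.Random(seed)   # fixed seed only to make the sketch repeatable
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_sec)  # exponential inter-probe gap
        if t >= duration_sec:
            return times
        times.append(t)

times = poisson_send_times(rate_per_sec=2.0, duration_sec=60.0)
print(len(times))  # on average about rate * duration = 120 probes
```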
In the table above, the load average referred to in the Frequency row is the
number of bytes sent & received in the active probe packets over an
hour, expressed as bits/second. This does not reflect the instantaneous
load, which may be much greater, or the bandwidth used to collect the data from
the monitoring hosts.
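As an illustration of that calculation, the sketch below converts a hypothetical probe pattern into the table's kbits/second figure. The one-ping-per-minute example is made up, not one of the projects' actual probe mixes.

```python
# Hourly "load average" of an active probe stream, as defined in the text:
# probe bytes sent and received over an hour, expressed as bits/second.
def load_kbps(packets_per_hour, packet_bytes):
    # Each probe is both sent and echoed back, so its bytes count twice.
    bytes_per_hour = packets_per_hour * packet_bytes * 2
    return bytes_per_hour * 8 / 3600 / 1000   # bits/s -> kbits/s

# Hypothetical monitor sending one 100-byte ping per minute (60/hour):
print(round(load_kbps(60, 100), 4))  # -> 0.0267 kbps averaged over the hour
```

Note that this counts payload bytes only; IP/ICMP headers would add a little more on the wire, which is part of why the table's figures are approximate.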
Conclusions
Skitter is aimed more at global Internet measurements, and so is broader
in scope than the others and in some ways the most distinct of the five.
Thus we mainly compare the first four
(i.e. exclude Skitter). As can be seen, of these four,
Surveyor makes the most frequent measurements and gathers the most data.
This makes it particularly useful for comparing and validating
more lightweight measurements, or for looking at the Internet with
fine granularity in time.
PingER makes the least frequent measurements and hence
has the least
network impact, which may be important for paths that have limited bandwidth (e.g.
to S. America, Africa, Russia or China). PingER also has historical data going
back for the longest time period and (apart from Skitter)
has the widest distribution
(different countries, continents, ISPs etc.) of
remote sites. AMP measures the largest number of host-pairs.
Currently PingER probably provides the most reports with long-term information,
though both AMP & Surveyor are starting to provide more in this area. Given the
amount of data Surveyor collects, it is understandable why they currently
make the raw data available only on demand.
These five projects should be regarded as complementary since they
have different
goals, different communities of interest and they monitor different paths.
At the same time there is active collaboration between the projects.
Since the
projects often have paths that overlap, i.e. there are AMP, Surveyor and RIPE
hosts installed concurrently at several sites that are also PingER sites (e.g.
SLAC, CERN), comparisons and correlations of the data are possible and
encouraged (see for example Comparison
of PingER and Surveyor and
Comparison of Surveyor and RIPE). Such comparisons help ensure the data is
correct (the projects use different code and mechanisms) and also help identify
the applicability of the data from each project.