Comparison of some Internet Active End-to-end Performance Measurement projects

Author: Les Cottrell, with help from Matt Zekauskas (Surveyor), Henk Uijterwaal (RIPE), and Tony McGregor (AMP).
Created: July 9, 1999; last updated: July 13, 1999


Page Contents

  • Introduction
  • Project Comparisons
  • Conclusions

  • Introduction

    There are several projects that are making Active (i.e. injecting probes) Internet End-to-end Performance Measurements (AIEPM). This document compares projects that are in the public domain and that make their reports available via the web. In addition to the projects reported in this document, there are commercial enterprises that make selected portions of their information available publicly. These include AndoverNews Network's Internet Traffic Report, Inverse Network Technology, Keynote and Matrix Information & Directory Services. Other enterprises make measurements for a particular community but do not provide public access to them; examples are the Automobile Network eXchange (ANX) and the Cross Industry Working Team (XIWT). For more on network monitoring projects (including passive monitoring) see Interesting web sites for Internet Monitoring.

    Project Comparisons

    There are five major public domain AIEPM projects that we will consider: Surveyor, RIPE, PingER, AMP and Skitter. Though NIMI does make measurements, its main goal is to provide an infrastructure for making measurements; at the moment no results are available publicly, so it will not be pursued further in this document.

    The remainder can be divided up by whether they make one-way or round-trip (two-way) measurements. Surveyor and RIPE make one-way delay measurements and require a Global Positioning System (GPS) receiver to provide clock synchronization between sites. AMP and PingER make two-way measurements, today using the Internet Control Message Protocol (ICMP) ping facility, and do not require a GPS. Skitter makes round-trip measurements but is more macroscopic (global) in purpose, measuring 35,000 sites hourly (or a subset thereof).
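    The need for GPS clock synchronization in the one-way projects, and its absence in the round-trip projects, can be illustrated with a small sketch (not any project's actual code). The clock-offset and delay values below are hypothetical; the point is that a round-trip time uses two timestamps from the same clock, so any offset cancels, while a naive one-way delay mixes two unsynchronized clocks and inherits their offset.

    ```python
    # Illustrative sketch: why one-way delay measurement needs synchronized
    # clocks while round-trip measurement does not. All values hypothetical.

    CLOCK_OFFSET = 5.0      # seconds the receiver's clock runs fast
    FORWARD_DELAY = 0.040   # one-way delay, sender -> receiver (s)
    REVERSE_DELAY = 0.060   # one-way delay, receiver -> sender (s)

    def sender_clock(true_time):
        return true_time                  # sender reads "true" time

    def receiver_clock(true_time):
        return true_time + CLOCK_OFFSET   # receiver's clock is skewed

    # Round trip: both timestamps come from the sender's clock, offset cancels.
    t_send = sender_clock(0.0)
    t_back = sender_clock(FORWARD_DELAY + REVERSE_DELAY)
    rtt = t_back - t_send                 # 0.100 s, correct

    # Naive one-way delay: receiver timestamp minus sender timestamp mixes
    # two unsynchronized clocks, so the 5 s offset swamps the 40 ms delay.
    one_way_naive = receiver_clock(FORWARD_DELAY) - sender_clock(0.0)

    print(f"RTT = {rtt:.3f} s, naive one-way = {one_way_naive:.3f} s")
    ```

    This is why Surveyor and RIPE equip each monitor with a GPS-disciplined clock, while ping-based projects can rely on the looser synchronization of NTP.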

    All except PingER require that a dedicated PC running Unix be placed at each monitoring site. PingER makes use of carefully selected existing hosts, and software only needs to be installed at the monitor hosts. Both AMP and PingER can monitor remote sites without any prerequisite to install hardware or software at the remote site. In all cases the monitors send packets at intervals to remote hosts and use these packets to gather delay and loss measurements. The monitor sites do full-mesh pinging (i.e. each monitor site monitors each other monitor site). Skitter and PingER also monitor many non-monitor sites. Most of the projects (Skitter in particular is very much oriented to making route measurements) also make concurrent traceroute measurements, which provide route history information. Surveyor, AMP and PingER provide web access to enable the public to select and search through the information available. RIPE provides access to the information only for monitor sites. As far as I can see, Skitter only provides examples of their reports.
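    The delay and loss reduction that all of these projects perform on their probe results can be sketched as follows. This is illustrative only (each project's actual reduction code differs); here a sample of None stands for a probe whose echo never returned.

    ```python
    # Sketch of reducing ping-like probe results to loss and delay statistics.
    # rtts_ms: one RTT sample in milliseconds per probe sent; None = lost probe.

    def summarize(rtts_ms):
        sent = len(rtts_ms)
        returned = [r for r in rtts_ms if r is not None]
        loss_pct = 100.0 * (sent - len(returned)) / sent
        return {
            "sent": sent,
            "loss_pct": loss_pct,
            "min_ms": min(returned) if returned else None,
            "avg_ms": sum(returned) / len(returned) if returned else None,
            "max_ms": max(returned) if returned else None,
        }

    # Example: 10 probes, one lost -> 10% loss
    samples = [31.2, 30.8, None, 29.9, 33.0, 30.1, 31.7, 30.4, 32.5, 30.6]
    print(summarize(samples))
    ```

    Loss is computed per probe sent, not per reply received, which is why a monitor must keep track of probes that time out as well as those that are echoed.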

    Metric                   | Surveyor             | RIPE                     | PingER                                              | AMP                                     | Skitter
    -------------------------|----------------------|--------------------------|-----------------------------------------------------|-----------------------------------------|--------------------------
    Method                   | 1-way delay & loss   | 1-way delay & loss       | 2-way ping                                          | 2-way ping                              | traceroute-like
    Hosts                    | Dedicated            | Dedicated                | "selected"                                          | Dedicated                               | Dedicated
    Time synchronization     | GPS                  | GPS                      | NTP                                                 | NTP                                     | NTP
    Frequency (load average) | ~2*2/s (~2 kbps)     | ~3/min (~0.33 kbps)      | ~0.01/s (~0.1 kbps)                                 | ~1/minute                               | Hourly
    Scheduling               | Poisson <2/s>        | Poisson <1/min>          | bursty (30 min)                                     | Linear random about 1st 15 s of minute  | ~30 min
    Packet size              | ~40 Bytes            | 100 Bytes                | 100 Bytes & 1000 Bytes                              | 64 Bytes                                | 52 Bytes
    Locations                | US, CA, CH, NL & NZ  | EU, IL, US               | 10 monitoring-site countries, 22 remote-site countries | US, NZ, NO                           | Monitors in Asia, CA, UK, US
    Monitors                 | ~51 (Jul-99)         | ~32 (Jul-99)             | 18 (Jul-99)                                         | ~70 (Jul-99)                            | 20
    Pairs                    | ~1000                | 1024                     | ~1200                                               | ~4600                                   | 35000
    Data start               | 1997                 | 1998                     | 1995                                                | 1999                                    | 1998
    Data availability        | Upon request         | Upon request             | Public access via Web                               | Public access via Web                   | ?
    Data storage             | ~38 MB/pair/mo       | 2 MB/pair/mo             | ~0.6 MB/pair/mo                                     | ~1.3 MB/pair/mo (0.5 MB zipped)         | ?
    Sponsors/Community       | CSG / Advanced       | RIPE / European R&E sites | DOE / ESnet / HENP / XIWT                          | NSF / NLANR / Internet 2                | DARPA / NSF / CAIDA
    In the table above, the load average referred to in the Frequency row is the number of bytes sent and received in the active probe packets over an hour, expressed as bits/second. This does not reflect the instantaneous load, which may be much greater, nor the bandwidth used to collect the data from the monitoring hosts.
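    As a worked example of this load-average definition, the sketch below computes the figure for a PingER-like probe pattern. The specific pattern assumed (11 pings of 100 bytes and 10 of 1000 bytes every 30 minutes, with each reply echoing the payload) is an illustration, not taken from this document; under that assumption the result lands near the ~0.1 kbps shown in the table.

    ```python
    # Load average as defined above: probe bytes sent and received per
    # measurement interval, expressed as bits per second.

    def load_bps(bytes_per_interval, interval_s, echoed=True):
        """Average probe load in bits/s; echoed=True doubles the byte count
        because two-way (ping) probes are echoed back to the sender."""
        total_bytes = bytes_per_interval * (2 if echoed else 1)
        return total_bytes * 8 / interval_s

    # Assumed PingER-like pattern: 11 x 100 B + 10 x 1000 B every 30 minutes.
    pinger_bytes = 11 * 100 + 10 * 1000       # 11,100 bytes per burst
    load = load_bps(pinger_bytes, 30 * 60)    # two-way, so bytes are doubled

    print(f"{load:.0f} bits/s (~{load / 1000:.1f} kbps)")
    ```

    For the one-way projects (Surveyor, RIPE) the probes are not echoed, so `echoed=False` would apply and only the sent bytes count toward the load.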

    Conclusions

    Skitter is aimed more at global Internet measurements and so is the most distinct of the five projects. Thus we mainly compare the first four (i.e. exclude Skitter). As can be seen, of these four, Surveyor makes the most frequent measurements and gathers the most data. This makes it particularly useful for comparing and validating more lightweight measurements, or for looking at the Internet with fine granularity in time. PingER makes the least frequent measurements and hence has the least network impact, which may be important for paths that have limited bandwidth (e.g. to S. America, Africa, Russia or China). PingER also has historical data going back for the longest time period and (apart from Skitter) has the widest distribution (different countries, continents, ISPs etc.) of remote sites. AMP measures the largest number of host pairs. Currently PingER probably provides the most reports with long-term information, though both AMP and Surveyor are starting to provide more in this area. Given the amount of data Surveyor collects, it is understandable why they currently make the raw data available only on demand.

    These five projects should be regarded as complementary since they have different goals and different communities of interest, and they monitor different paths. At the same time there is active collaboration between the projects. Since the projects often have paths that overlap, i.e. there are AMP, Surveyor and RIPE hosts installed concurrently at several sites that are also PingER sites (e.g. SLAC, CERN), comparisons and correlations of the data are possible and encouraged (see for example Comparison of PingER and Surveyor and Comparison of Surveyor and RIPE). Such comparisons help in ensuring the data is correct (the projects use different code and mechanisms) and also help identify the applicability of the data from each project.
