PAM Meeting Hamilton, NZ, April 3-4, 2000

Author: Les Cottrell. Created: April 3, 2000
Page Contents
Introduction
Wide Area Fault Detection by Monitoring Aggregated Traffic
Assessing Internet performance using SYN based measurements
RIPE vs Surveyor
Real Time measurement of Long Range Dependence in ATM Networks
Design Principles for Accurate Passive Measurement
Network Performance Visualization: Insight through Animation
Detection of Change in 1 Way Delay
Experiences with NIMI
Assessment of Accounting Meters
Internet Performance Monitoring for HENP
Measurement of VoIP Traffic
FlowBoy
The Traffic Matrix
Analysis of Internet Delay Times
Statistics and Measurements
Performance Inference Engine
SingAREN Measurements
Felix Project
Discussions

Introduction

The First Passive and Active Monitoring (PAM) workshop was held at the Hamilton Novotel, Waikato, New Zealand on April 3-4, 2000, following the IETF in Adelaide, Australia. There were about 70 attendees, 40 from overseas. Probably about 50% of the attendees also attended the IETF. Connectivity was via 2 * 6 Mbps DSL lines from Telecom NZ. There were about 4 PCs and about a dozen laptop hookups. There was also a WaveLAN wireless setup. 

Most of the talks were on active rather than passive measurements. This is partly because ISPs won't let one put sniffers/passive monitors on their networks. Active measurements work well at the edges of the network. The two methods are complementary. Most phone measurements have been passive. 

The meeting was a great success, and there was a question of when/if to hold another meeting. Combining this one with the IETF was very successful, since it enabled a contingent of providers to attend. It was agreed in principle that it would be good to have another meeting. The next IETF in a year's time is in Minneapolis. They asked for volunteers to host the next meeting. 

See http://www.cs.waikato.ac.nz/pam2000/

Design Principles for Accurate Passive Measurement

Measurement is a science, and standard hardware & software are not good enough. There are capacity issues (e.g. how can we measure OC192?), confidence issues (we need confidence in the results and in every stage of the process, and we need to understand the environment in which the measurements are taken: does spanning introduce problems, is the hub really a passive device or does it introduce variations, are the time stamps correct, does buffering on the card affect the timing, is the timing synchronized with UTC, are there scheduling effects in the OS that affect the timing?) and security issues. They are building hardware NIC cards with onboard processors and FPGAs to capture and filter packets. There are issues of how long to keep the archived information and what to keep, and of keeping an audit trail of how the data was processed (source code, OS rev levels etc.; they would also like a log of how the link configurations/routes etc. change). There are security issues in that some of the data may contain commercially competitive information, so the link owners may desire privacy; however, anonymizing the addresses can lose a lot of important information.
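
As a concrete example of the kind of confidence check being discussed, here is a minimal sketch (mine, not from the talk) that walks a standard little-endian, microsecond-resolution libpcap trace and counts timestamps that go backwards; the file name and record handling are only assumptions.

  # ts_check.py - sketch: verify that timestamps in a libpcap trace are
  # monotonically non-decreasing (one of the "confidence" checks above).
  # Assumes the classic little-endian microsecond pcap format.
  import struct, sys

  def check_timestamps(path):
      with open(path, "rb") as f:
          hdr = f.read(24)                      # global pcap header
          magic = struct.unpack("<I", hdr[:4])[0]
          if magic != 0xa1b2c3d4:
              raise ValueError("not a little-endian usec pcap file")
          last, backsteps, n = 0.0, 0, 0
          while True:
              ph = f.read(16)                   # per-packet record header
              if len(ph) < 16:
                  break
              ts_sec, ts_usec, incl_len, _ = struct.unpack("<IIII", ph)
              f.seek(incl_len, 1)               # skip the packet bytes themselves
              t = ts_sec + ts_usec / 1e6
              if t < last:
                  backsteps += 1                # clock went backwards: suspect capture setup
              last, n = t, n + 1
          print(f"{n} packets, {backsteps} backwards timestamps")

  if __name__ == "__main__":
      check_timestamps(sys.argv[1])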

To extract VoIP information one needs long traces to see the start and end of representative data, and to be able to see trends one needs to keep long archival series. For TCP traces he said we need 10 minute traces. To validate self-similarity etc. one may need traces covering several months. 

Simulation requires large volumes of data and examples of rare events (since rare events may dramatically affect the simulation accuracy).

Internet Performance Monitoring for HENP

I presented a talk on the above topic, see: http://www.slac.stanford.edu/grp/scs/net/talk/mon-pam-mar00/ There were several questions. 

Statistics & Measurements from network research to network operations

http://ardnoc82.canet2.net/statsmeasurementsPAM2000/, renehatem@canarie.ca


They have implemented a DBMS as a repository for their CA*Net3 data. They have a lot of tools such as Skitter and OC3Mon, and have databases for cflowd, mrtg and OC3Mon, with nice tools for drilling down through the database. The customer is the NOC, and they want to allow real-time access to the data, with alerts (e.g. if source host addresses beginning with 240 or greater up to 255 are seen, to look for smurf attacks, send an email warning to the network operator). The message is that collection is relatively easy but analysis is more complex.
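
As a rough illustration of that sort of alert rule (my sketch, not their code; the flow-record layout is a made-up (src, dst, bytes) tuple rather than cflowd's format):

  # Flag flow records whose source address falls in 240.x.x.x-255.x.x.x, the kind
  # of rule used to spot forged sources, and print a warning line an operator could be mailed.
  def first_octet(ip):
      return int(ip.split(".")[0])

  def scan(flows):
      for src, dst, nbytes in flows:
          if first_octet(src) >= 240:
              print(f"ALERT: bogus source {src} -> {dst} ({nbytes} bytes)")

  scan([("10.1.2.3", "192.168.1.1", 1200),
        ("244.9.8.7", "192.168.1.1", 64)])   # the second record triggers the alert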

Wide Area Fault Detection by Monitoring Aggregated Flows

Net admins are concerned about how the infrastructure is working and the availability of services within the provider network. Users are concerned about infrastructure, connectivity and availability of services around the world. To learn about conditions outside the network they use ICMP messages collected passively using tcpdump. In particular they look at the ICMP unreachable messages. They add information to the ICMP messages providing the AS, topology etc. (using whois, the Internet Routing Registry & DNS). By correlating the ICMP unreachable messages they can identify the bottleneck. They find many events (ICMP unreachable messages) are generated by a single failure point. They are developing tools to enable visualization of what is not visible when one sees a broken link. Forecasting network availability is useful to net admins and users, and bottleneck analysis indicates dangerous regions; maybe this can provide automatic announcements in the future.
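
A toy sketch of the correlation idea (my own illustration, with a made-up record layout and addresses): group the passively observed ICMP unreachable messages by the router that emitted them and by destination prefix, and a single failure point shows up as one heavily repeated key.

  from collections import Counter

  def correlate(events):
      # each event: (router that emitted the unreachable, destination that was unreachable)
      by_router = Counter(r for r, _ in events)
      by_prefix = Counter(".".join(d.split(".")[:2]) for _, d in events)  # crude /16 grouping
      print("events per emitting router:", by_router.most_common(3))
      print("events per destination /16:", by_prefix.most_common(3))

  correlate([("198.51.100.1", "203.0.113.7"),
             ("198.51.100.1", "203.0.113.9"),
             ("198.51.100.1", "203.0.114.2")])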

Network Performance Visualization: Insight through Animation

NLANR is using animation to help visualize the large amounts of data that are needed to characterize network performance. One of the tools they have developed is called Cichlid, which allows viewing datasets in 3D as if they were physical objects. The main author is jabrown@nlanr.net. Cichlid is free, is in C and uses OpenGL, running on FreeBSD, SGI, Linux and MS Windows. It is a client/server architecture. One should only need to write a small piece (~200 lines of C) to interface a new set of data. It produces 2 basic classes of visualization: bar charts, which can be stacked, and vertex/edge graphs. Tony McGregor demonstrated several examples of animated bar charts of site vs. time vs. RTT. The demo ran on a machine with a 250 MHz CPU and a 9 GB hard disk. 

They are working on an improved user interface (menu driven rather than command line), save & restore of status, new models (beyond bar charts and edge/vertex graphs), Postscript output, and an internal redesign to use threads (being considered but not yet committed to).

http://moat.nlanr.net/Software/Cichlid/

Measurement of Voice Over IP Traffic

jpc2@cs.waikato.ac.nz

Problems that may come with VoIP include the fact that its real-time nature conflicts with TCP's algorithms, which may introduce extra jitter and pauses, so UDP is used for the data path; but UDP has no congestion control and no back-off, so it could be very bad for TCP on congested networks. The aim is to provide a model for simulation of VoIP traffic; then one can investigate the interaction between TCP & UDP. The method is to take passive measurement traces and from these measurements provide data for simulation. 

The data comes from the Measurement and Operations Analysis Team ( http://moat.nlanr.net/), which has 15 OC3 & OC12 monitors, mainly in the US. A 2nd source is the NZ Internet Exchange, which has a 100 Mbps Ethernet switch and 2x10 minute traces daily from Dec 98 to April 99. The data is captured by spanning the switch (they believe there is low loss, but there will be some collisions); it is a software based measurement (Linux system), which limits timestamp accuracy.

They look for H.323 (the standard for VoIP) which allows inter-operation between different applications. They hunt for VoIP traffic by looking for specific TCP control ports (1503, 1720, used to establish connections), then look for dynamic RTP data & control ports (RTP uses an even port for data and even+1 for control). It also often has fixed sized data packets. The percentage of traffic is low, e.g. 22 conversations from 211 traces on NZIX & approximately 1 MB from 7 GB of traces from the NLANR data. The trace duration needs to be as long as possible (conversations often exceed the length of the traces); it may be possible to move to real-time data collection on a 24x7 basis. 
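
A rough sketch of that hunt over simplified flow records (my own illustration; the tuple layout is a stand-in for the trace records, and the port heuristics are only those described above):

  # Find H.323 call-setup flows on TCP ports 1503/1720, then look for UDP flows
  # between the same host pair where an even port and the next odd port both
  # appear (the RTP data / RTCP control pairing).
  H323_SETUP_PORTS = {1503, 1720}

  def find_voip(flows):
      # flows: iterable of (proto, src, sport, dst, dport)
      pairs = {frozenset((s, d)) for p, s, sp, d, dp in flows
               if p == "tcp" and dp in H323_SETUP_PORTS}
      udp_ports = {}
      for p, s, sp, d, dp in flows:
          if p == "udp" and frozenset((s, d)) in pairs:
              udp_ports.setdefault(frozenset((s, d)), set()).add(dp)
      for pair, ports in udp_ports.items():
          rtp = [q for q in ports if q % 2 == 0 and q + 1 in ports]
          if rtp:
              print("likely VoIP conversation:", sorted(pair), "RTP ports", rtp)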

The data rate is about 25 Kbytes/sec but with large excursions. Control information is a lot lower. Another case was 1700 bytes/sec. 

For the simulations they want to vary TCP & UDP from 0-120% of link capacity. The TCP data is formed from the NZIX TCP trace logs; the data is offset & over-layed if required to generate extra load. 

The results show that UDP traffic badly affects the TCP performance if the link is saturated (i.e. the UDP does not back off and TCP is badly affected - the available bandwidth for TCP drops off rapidly as UDP traffic increases).

Performance Inference Engine (PIE) - Deducing More Performance using Less Data

 Susmit Patel from Mitre. This was a description of a mathematical technique to deduce performance measures from transactional data. 

Assessing Internet Performance Metrics Using Large-scale TCP SYN-based Measurements

Martin Horneffer, University of Cologne. 

The motivation is to help the user understand what performance they might get, to help choose between ISPs, to measure the performance of various Internet access points, and to help ISPs in selling services to customers. The first step is to compose a list of Internet hosts by observing traffic (e.g. with NetFlow, tcpdump, NetraMet) and seeing what hosts are actually used. Thus the actual hosts probed vary from hour to hour (the granularity of composing the list of hosts from NetFlow), so he does not have historical information on any single pair. He showed a histogram showing that for Koln, 100 hosts cover > 50% of traffic and 1000 cover 66% of traffic; covering all traffic would take on the order of 100,000 hosts. He uses a list of about 1000 hosts. Having got the list, he then measures the network layer using metrics from IPPM. Ping has problems (e.g. it may be treated differently from other traffic), so instead he uses UDP-echo and TCP SYN so routers/hosts can't tell the difference. 

He uses the TCP SYN mechanism to measure loss & RTT, and can close the connection with a simple RST. One needs to watch out for loss of the SYN-ACK coming back, since that will add a timeout (typically, but not necessarily, 3 seconds) before the re-transmit. Loss of the third packet may cause a re-transmit of the SYN-ACK. Heuristics can be used to remove the re-transmits (e.g. by looking at the RTT frequency distribution and looking for a 2nd spike at high (say around 3 second) delays).
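
A minimal sketch of such a probe (mine, not Martin's code): time a connect() call, which completes when the SYN-ACK arrives, and set SO_LINGER to zero so the close tears the connection down with an RST rather than a full FIN handshake. Measurements near the 3 second mark would be discarded as likely retransmissions, per the heuristic above.

  import socket, struct, time

  def syn_rtt(host, port=80, timeout=5.0):
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.settimeout(timeout)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
      t0 = time.time()
      try:
          s.connect((host, port))          # returns once the SYN-ACK arrives
          return time.time() - t0
      finally:
          s.close()                        # sends RST, thanks to the zero-linger option

  print(syn_rtt("www.example.com"))        # hypothetical target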

There have been complaints about TCP SYN "attacks". People complain by sending to well-known email addresses (root, postmaster, abuse, webmaster), by looking up the DNS SOA, by looking up the RIPE DB, or by complaining to the rector of the university. Martin explains to the user the nature of what he is doing after hearing from the user. Some users ask him to stop monitoring. He tried putting information into the packet, but most people did not look at it. He set up a web page and a TXT entry in DNS (neither of which helped), set up a dedicated IP address with a descriptive name (measurement-for-thesis.rrz.uni-koeln.de), and put a virtual web server on the measurement host; the combination of the last two is reasonably effective. UDP-echo is worse in terms of complaints than TCP SYN. In over a year he has received about 25 complaints; of these 4 were related to SYN-ACKs, the others were about UDP-echo. Since they dynamically select the remote sites based on passively measured TCP flow information, it is not realistic to notify on the order of 1000 sites in advance that they may see a lot of SYN/ACKs.

He plots the predicted TCP throughput (based on the formula from Padhye et al., Proc. ACM SIGCOMM, Sept. 1998) as a cumulative frequency (cumulative across multiple connection pairs) for various providers.  
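
For reference, a small function with my transcription of the Padhye et al. steady-state throughput approximation (b is packets acknowledged per ACK, p the loss probability, t_rto the retransmission timeout); the example numbers are arbitrary, and this is not code from the talk.

  from math import sqrt

  def padhye_throughput(mss, rtt, p, t_rto=1.0, b=2):
      """Approximate steady-state TCP throughput in bytes/sec."""
      if p <= 0:
          return float("inf")
      denom = (rtt * sqrt(2 * b * p / 3)
               + t_rto * min(1, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
      return mss / denom

  print(padhye_throughput(mss=1460, rtt=0.2, p=0.01))   # roughly 57 Kbytes/sec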

His method is unbiased & objective since the list of hosts is obtained by automatic scripts; it scales to many hosts, requires low bandwidth, has good accuracy, and provides clear results for application performance. It requires resources: a user base, a router (NetFlow), a host machine, a dedicated address, database entries (DNS, RIPE), and permanent maintenance - DON'T TRY THIS AT HOME.

Detection of Change in One-way Delay for Analyzing the Path Status

This had to do with understanding and measuring clock skew between sites.

FlowBoy - an object oriented framework for generic network flow management - Mike Haberman NCSA

There is an extensive list of flow management tools. However, they don't interwork; new software has to be introduced when flows are exported by a vendor in a different format or there is a new flow type, and the learn, install, configure cycle can be quite expensive. So this work applies OO technology to flow management.

Measuring IP Network Performance - The SingAREN Approach

Talk was given by Heng Seng Cheng.

SingAREN is connected via 14 Mbps to STARTAP (& thus vBNS, Abilene, CA*Net2); it is also connected to Japan, Taiwan, Korea and Malaysia, and of course provides connectivity for Universities etc. in Singapore.  

The tools need to notify in the event of link/node failure or performance degradation, and to provide network performance from an end user's perspective. They looked for available tools.

They use Surveyor, Skitter, ICMP with MRTG, and Speed Meter (from Hungarian MirrorNet company).

RTT to LA has a minimum of around 200msec, 72 msec to Korea/APAN. 

Speed Meter goal is to help web site owners determine how their site performs in different parts of the world. See http://www.tracert.com/. They install agents in several locations in the world. Agent downloads a web page and measures different parameters of the page and reports to the central server. Measurement results reflect end-to-end performance seen by users, and results are sent to subscribers by email.

Satellite links are useful to connect SingAREN to neighboring countries such as Vietnam, the Philippines, Thailand and Burma, where it will be many years before fiber is installed. The BER is typically 10^-6 (i.e. much higher than fiber) and the RTT is long (500 msec), so the satellite link is likely to dominate the overall performance of the IP measurements. The ITU has specified some QoS performance objectives for ATM over satellite, e.g. cell loss better than 7.5*10^-6.
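
A back-of-the-envelope calculation (mine, not from the talk) of why such a BER matters at the packet level:

  # At a bit error rate of 1e-6, an uncorrected 1500-byte packet has roughly a 1%
  # chance of being hit by at least one bit error, before any queueing loss at all.
  ber = 1e-6
  bits = 1500 * 8
  p_loss = 1 - (1 - ber) ** bits
  print(f"packet corruption probability: {p_loss:.2%}")   # about 1.2%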

Comparing two implementations of the IETF IPPM One-way delay & Loss Metrics

Henk Uijterwaal, Matt Zekauskas, Sunil Kalidindi.

See RFCs 2330, 2679, 2680 for descriptions of the metrics for one-way delay and packet loss. The IETF requires two independent implementations that will give the same results. The projects are Surveyor and RIPE. No direct comparison is possible, so they have to look at distributions.

Both use GPS and UDP packets. Surveyor uses 40-byte packets at 2/sec; RIPE uses 100-byte packets at 3/minute. Both use Poisson scheduling. Both also do traceroutes, measurements are centrally managed, and results are available on the web. Surveyor uses BSDI Unix, RIPE uses FreeBSD, and they use different GPS devices (Surveyor uses a True Time PC[I]-SG "bus-level" card which is expensive, ~$2K). There are 71 Surveyors covering 2100 paths; RIPE has 43 machines (they plan to double this), mainly at RIPE members, i.e. European ISPs. CERN, SLAC, RIPE NCC & Advanced N&S have both RIPE & Surveyor machines. Unix time is typically only accurate to 10 msec; BSD has a sub-clock at 1.2 MHz which gives usec accuracy, but the clocks on the machines run independently, so they use GPS to provide synchronization and get the internal clocks synchronized to a few usec.

He showed a plot of delays from RIPE to Advanced and they track one another closely; he then showed the percentiles, and the distributions for 2.5%, median, and 97.5% look similar. He showed our plots of the statistics and then showed the drift required. Henk looked at understanding the drifts. He varied the packet length over 40, 200, 500 and 1500 byte packets, plotted delay vs. length, and gets a linear variation: a = (8.09 ± 0.10)×10^-4 byte/ns for RIPE, and (8.47 ± …)×10^-3 for TA. He can explain 0.14 msec by the packet size difference; further investigation is needed for the remaining 0.06 msec. They will look at the effects of different sampling frequencies. 
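
The fit behind those slope numbers is just a straight-line regression of delay against packet size; a sketch (mine) with invented sample delays chosen only to show the shape of the output:

  # Regress one-way delay against packet size: the slope captures the per-byte
  # (serialization) cost and the intercept the size-independent delay.
  import numpy as np

  sizes = np.array([40, 200, 500, 1500], dtype=float)           # bytes
  delays = np.array([11.02, 11.15, 11.39, 12.20], dtype=float)  # ms, made-up values
  slope, intercept = np.polyfit(sizes, delays, 1)
  print(f"delay ~ {intercept:.2f} ms + {slope*1e3:.3f} us/byte * size")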

A possibility would be to look at two receivers of the same data; this may show some instrumental differences.

Talk will be available on Monday at http://www.ripe.net/test-traffic 

Experiences with NIMI

Vern Paxson, Andrew Adams & Matt Mathis

NIMI is a command & control infrastructure for Internet measurements, ISPs and others volunteer resources and users (researchers) run experiments on the shared resources (requires delegation). 

Configuration is done by the CPOC, which controls a set of probes. The CPOCs have Measurement Control (MC) and Data Acquisition Control (DAC). The key is flexible policy control with heterogeneous administration, with ACLs and delegation. There was an early focus on security design. There are 35 nimids with 2 CPOCs, and 8 users/studies (e.g. Mark Allman is using it for capacity measurements). 

Lessons learned include remote error handling, lack of measurement grouping, key distribution is hard, the need for fine grained state persistence, data integrity under unexpected circumstances, etc. Heterogeneity lessons: tools required privileges, modifications to system config, secure admin access, etc. Large distributed software lessons: clocks, DNS flakiness, subtle APIs, exhausting resources - made harder by 4 different OSs.

Functional extensibility: experience has shown that adding "user" code is crucial. The basic tension between extensibility and avoiding unexpected behaviors boils down to resource management and security. Resources include: CPU, memory, disk space, I/O activity, network activity, packet filters (channels), TCP & UDP ports, and response ID (name) space (e.g. ports within ICMP errors, in particular for tools that run at high rates and generate ICMP errors). Threat models: subverting the platform provides a well-connected stepping stone for DoS (Denial of Service), sniffing, or altering measurement software. Attacks from within NIMI: measurement tools could deliver attack packets or collect private information. They could also perturb other people's measurements: forge measurement packets, consume excessive resources.

Trust models: grant privileged access to NIMI admins, the local admin performs privileged actions; there is widespread Internet reliance on trustworthiness by eminent authority. NIMI extensibility requires an iron-clad trust model, otherwise all providers need to trust all users; perhaps an open question.

New tools are installed out of band, not using NIMI (scp, tar etc.). They intend to deploy tools by allowing the nimid to get the tool from the MC, then hope to go to CPOC > MC > nimid, which should be more scalable.

Protection from corrupt tools: code validation (scanning imported code - hard, cf. the halting problem), file system sandboxing (for the entire NIMI probe, chroot to a private NIMI tree, or chroot to each measurement tree). Network sandboxing has a granularity problem; possible approaches are kernel mods, linking against a trusted library, an enforcement daemon, and "safe" languages. Proxying all network requests gives sender latency problems and needs a receiver timestamp in the driver (but there are latency problems for some metrics); it can be subverted but solves packet filter woes. Tracking all resource usage can help, but one only finds out after the fact. 
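
For illustration only, a bare-bones version of the "chroot to a private tree" option (my sketch; the paths are hypothetical, it needs root, and real sandboxing also needs privilege dropping and resource limits):

  import os

  def run_confined(tool_path, jail_dir):
      # Confine a measurement tool to its own directory tree before exec'ing it.
      pid = os.fork()
      if pid == 0:                           # child
          os.chroot(jail_dir)                # tool can no longer see the rest of the file system
          os.chdir("/")
          os.execv(tool_path, [tool_path])   # tool_path is resolved inside the jail
      os.waitpid(pid, 0)

  # run_confined("/bin/measure", "/var/nimi/jail")   # illustrative call only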

Safe language examples are Java, Python, and Perl taint mode. They have restricted programming semantics, restricted semantics for packet I/O, and complete control over resources. It is hard but a strong solution - is it really needed?

They are looking at a safe language with runtime control over all resources and restricted network I/O. 

The Traffic Matrix

Martin van den Nieuwelaar

AUCS is a tier 1 transit provider in the Netherlands that is looking to expand globally; it recently merged with Infonet. They have about 60 customers in 16 countries in Europe with link speeds of 155 Mbps, and 3 OC3s to New York. 

Up until late last year monitoring was done by SNMP queries. They looked at other tools and came up with NetFlow exports from Cisco routers. Export is done by UDP so records may not get through; there is a counter telling whether they got through, but no guarantee. NetFlow exports allow one to tell what portion of traffic coming in one interface goes out another interface. He has compared NetFlow with SNMP stats and got agreement to 5-10%. He has noticed that some flows last longer than the Cisco software reports. Products to analyze the stats include a Cisco product and cflowd. cflowd is free and is very good at integrating statistics.
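
The traffic-matrix idea itself is simple to sketch (my illustration, with a simplified stand-in for the NetFlow record fields): accumulate bytes per (input interface, output interface) pair, which is exactly what per-interface SNMP counters alone cannot give you.

  from collections import defaultdict

  def traffic_matrix(flow_records):
      # each record: dict with the flow's input/output ifIndex and byte count
      matrix = defaultdict(int)
      for rec in flow_records:
          matrix[(rec["input_if"], rec["output_if"])] += rec["bytes"]
      return matrix

  m = traffic_matrix([{"input_if": 2, "output_if": 5, "bytes": 1500},
                      {"input_if": 2, "output_if": 5, "bytes": 40},
                      {"input_if": 3, "output_if": 5, "bytes": 576}])
  print(dict(m))    # {(2, 5): 1540, (3, 5): 576}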

About half the POPs are instrumented with collectors. They are collecting about 8 Gbytes/day from each router and have about 150 routers, so they do a lot of aggregation (e.g. into 5 minute intervals).

A problem is how ifIndex numbers are assigned, depending on whether a board is added in hot mode or maintenance mode; this makes it hard to track the interfaces. They provide reports on the number of flows seen per hour for each link. Running NetFlow adds about a 15% load on the routers.

Felix Project: Topology Discovery from One-way Dependence in ATM Networks

Bruce Siegell  from Telcordia gave the talk.

Research and prototyping of novel methods for monitoring network health, based on analysis of loss and RTT from sparse mode monitoring.

Felix monitors make one-way delay measurements. They rely on clocks on the hosts, so they must deal with clock synchronization. They model the network by assuming all delays can be associated with links (router delays are bundled into links), and assume links are unidirectional. A path's delay is the sum of the delays on the links in the path: the path delay equals the topology matrix times the link delays, plus an error term. The discoverable topology covers only the links traversed. The goal is to produce a map of the network. They create a matrix of what paths share something in common, i.e. paths vs. edges. They look at delay peaks and see how they correlate across multiple pairs to see if the paths share a component (edge). They also minimize path lengths (number of edges in a path), with the constraint that the sources and destinations can't be moved.
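
In my notation (not taken from the paper), the model described in words above is: d is the vector of measured path delays, x the unknown per-link delays, and A the path-by-link incidence matrix.

  \[
    \mathbf{d} = A\,\mathbf{x} + \boldsymbol{\varepsilon}, \qquad
    A_{pj} = \begin{cases} 1 & \text{if path } p \text{ traverses link } j \\ 0 & \text{otherwise} \end{cases}
  \]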

They investigated clock drifts. They look at the minimum delays and shift the lower envelopes so they are centered around zero, then do the same for the reverse path, then flip one of the graphs (since drift extends (or contracts) the delay in different directions for the two paths); the curves then lie on top of one another, so they can be straightened by adjusting for the clock drift.

Real-Time measurement of Long-Range Dependence in ATM Networks

Matthew Roughan, Darryl Veitch, Jennifer Yates, Martin Ahsberg, Hans Elgelid, Maurice Castor, Mick Dwyer & Patrice Abry - University of Melbourne. 

They want to do real time measurement to provide immediate access and reduce memory needs. They recognize that traffic is fractal, i.e. there are invariance relationships between scales (self-similarity), a lack of a characteristic time scale, radical burstiness, strong dependence on the past (no exponential drop-off), and difficult statistical properties. Implications for networks include lower utilization at a given QoS and buffer insensitivity (larger buffers don't save us).
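
For intuition, one simple offline estimate of the degree of long-range dependence is the aggregated-variance method sketched below (my illustration; the authors use the more robust wavelet-based estimator): for a self-similar series the variance of the m-aggregated series scales like m^(2H-2), so the Hurst parameter H falls out of the slope of log-variance against log-m.

  import numpy as np

  def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32, 64)):
      variances = []
      for m in scales:
          n = len(x) // m
          agg = x[:n * m].reshape(n, m).mean(axis=1)   # block means at scale m
          variances.append(agg.var())
      slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
      return 1 + slope / 2                             # slope = 2H - 2

  rng = np.random.default_rng(0)
  print(hurst_aggregated_variance(rng.normal(size=100_000)))   # ~0.5 for white noise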

Assessment of Accounting Meters with Dynamic Traffic Generation Based on Classification Rules

Georg Carle, Jens Tieman and Tanja Zseby - GMD Focus

The motivation is to do fair charging, which requires inexpensive metering of used resources. The requirements depend on the QoS provisioning techniques, the charging scheme, and the expected traffic characteristics. A flexible testbed is needed. There are 4 major accounting meters: NetraMet (free), IPAC Linux IP accounting, NetFlow (from Cisco only), and the NARUS probe.

Analysis of Internet Delay Times

Stele Martin <hmartin@cs.waikato.ac.nz>, Anthony MacGregor & John D. Cleary, U of Waikato

They want to break out web page delay into various components; they need a model to describe the delays and want to use passive measurements. The delay consists of the physical network delay, the network/queuing delay, and the client/server processing delay. Later they want to look at loss.

They estimate the wire time from the measured distribution of times. The network queuing delay is load dependent, and they take it as being the difference between the minimum and the mean of the distribution. The difference between the mean of the data time and the mean of the SYN-ACK time gives the server time. Total delay = 2 * physical + 2 * net/queuing + server.

Total delay = 2*(client phys + server phys) + 2*(client net + server net) + server

They then plot the 3 components on a triangle with 3 coordinates (% server, % net, % physical), and the clustering shows where the delay components come from.
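
A sketch of how those three percentages might be computed from the per-connection timing distributions (the variable names, and the use of the minimum as the physical component, are my reading of the description above, not the authors' code):

  def decompose(synack_rtts, data_rtts):
      physical = min(synack_rtts)                                         # wire time
      network = sum(synack_rtts) / len(synack_rtts) - physical            # mean - min
      server = sum(data_rtts) / len(data_rtts) - sum(synack_rtts) / len(synack_rtts)
      total = 2 * physical + 2 * network + server
      parts = {"physical": 2 * physical, "network": 2 * network, "server": server}
      return {k: 100 * v / total for k, v in parts.items()}               # percentages for the triangle plot

  print(decompose(synack_rtts=[0.050, 0.055, 0.070], data_rtts=[0.210, 0.180, 0.240]))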

Future work: make more use of different RTT classes; analysis of persistent HTTP and non-HTTP traffic; packet loss & timeouts; statistical confidence/robustness; processing of partial sessions; validation in an ideal environment.

On-line Generation of Fractal and Multi-Fractal Traffic

Darryl Veitch, Jan-Anders Backar, Jens Wall, Jennifer Yates, Matthew Roughan.

Discussions

I talked to David Moore of CAIDA/SDSC and Matt Zekauskas of ANS/Surveyor. Matt is optimistic about getting a Surveyor installed at SDSC in the near future. 

I talked to Daniel Karrenberg of RIPE concerning automated use of the NIKHEF traceroute with the RIPE AS database. He encouraged such use, and expects it to be in the noise at the level I proposed (about 70 hosts each tracerouted once an hour). 

I discussed with Matt Zekauskas what the content of our presentation to the Internet 2 folks in May should be. He said they want an overview of the active measurement programs, something on long term trends, and IPv6 (especially if we have some new data). 

Andrew Adams (of PSC) and I looked at how we would get started on putting PingER into NIMI. We installed software, set up and propagated keys, started up the new data analysis client, and reviewed how to run the measurement client.

We were asked if SLAC would be willing to host the next PAM meeting in 2001.

