XIWT/IPEX Working Session Meeting, Aug 2000

Authors: Les Cottrell. Created: August 22, 2000


Page Contents

Introduction
IPEX and DARPA
SSF
What to measure
Understanding Internet Performance
Discussions 1
Highlights of day 1
Network data warehouse
MINC
Measurement techniques for content distribution
IPEX demo
Discussions 2
Data Grids

Introduction - Fred True, AT&T

This meeting was held at the AT&T Shannon Laboratory, Florham Park, New Jersey on 22-23 August 2000. There were 25 representatives from Verio/NTT (Randy Bush), NIST, CNRI (3), Compaq, Telcordia, HP (Sharad Singhai), WestGroup (Dave & Jeff), Ericsson, Berkeley Lab (Axel), Agilent (Rick), Kaiser Permanente (2), AT&T (Fred True + another), and Saarbruecken (Anya, ex-AT&T). The XIWT was founded to bring together the computer companies and the telecommunications companies.

The issues to be addressed by the IPEX working session include:

Other thoughts: 

How do we know we have succeeded? By providing data of use to the DARPA folks. DARPA also wants passive data. The infrastructure needs to have a long lifetime and be low maintenance. Need to define goals, and define success and failure. How does it differ from other measurements? Compare results from different sources. XIWT does have a baseline of the PingER measurements, but the idea of the current project is not just an extension of PingER to other sites.

K Claffy has provided an interesting web site comparing the various measurement infrastructure activities; see http://www.caida.org/analysis/performance/measinfra/

Some of the requirements for modeling/simulation are to extend the simulators to multiple hops. For example, ns typically characterizes a path by its bottleneck.

IPEX & DARPA - Chuck Brownstein

Started out with PingER, with the idea of using the results for SLAs. Now want to put together a set of industry players to set up measurements on an experimental platform and set up an experimental infrastructure, moving towards providing information to the simulation/measurement community. DARPA is funding at a marginal rate to see if it can do something valuable; then the hope is there will be more funding. Need to define an interesting topology, then get commitments from people to provide sites. Also need to find/define an interesting/useful platform which will have a lifetime of 3-5 years. One advantage XIWT has is its ability to bring together multiple industrial partners; it has bylaws that protect/indemnify people who participate in such projects, yet look after ownership and allow bringing in others from outside to participate.

Scalable model and simulation framework (SSF) of Internet infrastructure architecture - Andrew Ogielski, Renesys Corporation 

SSF is used for simulation and analysis of very large networks, and they have tools for this. They are proposing to develop a next generation that is scalable, extensible, and open source. They have simulation probes to measure the results of a simulation. They are funded by DARPA.

They distribute TCP, UDP, BGP4, and OSPF protocols. They have validation tests for the running protocol behavior; see for example www.ssfnet.org. They manage random numbers so they can get reproducible results. They use the CERN statistics package. A big problem with tens of thousands of hosts is how to extract the data from the many thousands of streams of data. The tools are open and are written in pure Java.
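The reproducible-random-numbers idea mentioned above can be sketched as follows. This is a hypothetical illustration (SSF itself is written in Java and its actual scheme is not described here): each simulated entity gets its own RNG stream derived from a master seed and a stable name, so replaying a run gives identical behavior regardless of how simulation events interleave.

```python
import hashlib
import random

def stream_rng(master_seed: int, name: str) -> random.Random:
    """Derive an independently seeded RNG stream for one simulated
    entity.  Seeding from (master_seed, name) rather than drawing
    from a shared generator keeps each stream reproducible no
    matter in what order entities are created or scheduled."""
    digest = hashlib.sha256(f"{master_seed}:{name}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

# Two runs with the same master seed replay identical link behavior.
link_a = stream_rng(42, "link-a")
link_b = stream_rng(42, "link-b")
```

Deriving the seed from the name (rather than from a shared master generator) is the key design choice: it decouples a stream's sequence from event ordering elsewhere in the simulation.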

They want real-world measurements to validate their models. They want to correlate multipoint measurements of flows on a really wide scale. For example, they want to see if congestion storms really exist, how they perform, and whether they account for long-term correlations.

What to Measure - Fred True, ATT

Active measurements are relatively easy: there are many existing tools & platforms, and one can leverage existing work (PingER, Surveyor, NIMI) and existing definitions. Do we need ping, UDP/TCP probes, and multicast? Active measurement provides a stable, long dataset for analysis of baseline characteristics.
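The kind of delay/loss probing discussed above can be sketched as a small round-trip prober. This is an illustrative assumption, not any specific tool's interface: real tools like PingER use ICMP echo, while the sketch below uses sequenced UDP packets against a cooperating echo responder so it needs no raw sockets.

```python
import socket
import time

def udp_probe(dest, count=10, timeout=1.0, payload=b"x" * 56):
    """Send sequenced UDP probes to an echo responder at `dest` and
    return (loss_fraction, list of RTTs in ms).  A probe counts as
    lost if no matching echo arrives within `timeout` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts, lost = [], 0
    for seq in range(count):
        msg = seq.to_bytes(4, "big") + payload   # sequence number tags each probe
        t0 = time.perf_counter()
        sock.sendto(msg, dest)
        try:
            data, _ = sock.recvfrom(2048)
            if data[:4] == msg[:4]:
                rtts.append((time.perf_counter() - t0) * 1000.0)
            else:
                lost += 1                        # stale or mismatched echo
        except socket.timeout:
            lost += 1
    sock.close()
    return lost / count, rtts
```

Run repeatedly against a fixed set of remote responders, the loss fraction and RTT series form exactly the sort of long baseline dataset described above.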

Passive measurements have LOTS of interest: modeling/simulation projects need real data, and network operators and researchers want to know general traffic properties. Can we surmount the obstacles of privacy and security? Bear in mind that if one gives out complete traces then clear-text passwords are visible.

The amount of anonymization will depend on who gets the data, whether it is to be made public, etc. What the data contains depends on the goals of the measurements. Issues include how to get accurate timestamps for packets. What is the minimal data that is useful to gather? Want sites to be topologically diverse. What are the questions of interest that will motivate people to put in place an infrastructure to answer them?
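A minimal sketch of one anonymization option discussed here: replace addresses with keyed-hash pseudonyms before a trace leaves the site. The function name and label format are illustrative assumptions; shared research traces often need stronger, prefix-preserving schemes so that topology information survives.

```python
import hashlib
import hmac

def anonymize_ip(ip: str, key: bytes) -> str:
    """Replace an address with a keyed-hash pseudonym.  The mapping
    is consistent within one dataset (same IP -> same label) but is
    not reversible without the site-held key.  Note this is a simple
    sketch: it destroys prefix structure, which some analyses need."""
    tag = hmac.new(key, ip.encode(), hashlib.sha256).hexdigest()[:12]
    return "host-" + tag
```

Using a keyed HMAC rather than a plain hash matters: without the key, an outsider could hash the whole IPv4 space and invert a plain-hash mapping by brute force.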

Understanding Internet Performance from User Perspective - Raghu Kacker

One objective from DARPA is monitoring and anomaly-detection research: predictive models for RTT, and investigating their application to anomaly detection. PingER data collected at CNRI from 5/1/98 through 4/30/2000 are integrated with StatServer and made available on the web; the plan is to keep the data current to yesterday or the day before. StatServer provides data analysis by S-Plus, interfaced via the Web. The service is used to test & validate methods (algorithms) to establish baselines, detect anomalies & identify trends. Proof of concept: http://statserver.statsci.com/statserver/demos. They plan to expand the menu (recommendations welcome) and to include more datasets, other probes, passive data, and broad access; this needs a standard RDBMS (e.g. Oracle). The service provides autocorrelation and selection. DARPA wants the data to be available to the whole community.
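The autocorrelation analysis mentioned above is central to baselining RTT series, so a minimal sketch may help; the function below computes the standard sample autocorrelation and is a generic illustration, not StatServer's implementation.

```python
def autocorrelation(x, max_lag):
    """Sample autocorrelation of a series (e.g. RTTs) at lags
    0..max_lag.  Persistent positive values at small lags suggest
    sustained congestion episodes rather than independent noise,
    which is what makes RTT partially predictable."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    acf = []
    for k in range(max_lag + 1):
        cov = sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k)) / n
        acf.append(cov / var)
    return acf
```

An anomaly detector built on this would flag measurements that fall far outside the range predicted from the recent, correlated history.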

Discussions 1

Andy Ogielski outlined what the simulators need:

The needs of the simulator people appear to be directed to passive measurements, both internal to the network and at the edges, so next we outlined the requirements for what needs to be measured, with an eye to the low-hanging fruit (i.e. easy to gather and most important):

To gather such information, where do we put these monitors? This includes what kinds of connections we need, such as OCx, GE, FDDI, and ATM for ISPs. We also need to passively monitor active measurements, for example passively measuring an active application (AFS, Objectivity). Asymmetry is a problem and has to be accounted for. For some cases one can simply analyze/aggregate the data locally; for others one needs to be able to export the raw data with timestamps.

Next we have to ask what the sizing of the monitor is: how many MHz, how much disk, what interface speed. This depends on what we measure and how much information needs to be kept and distributed.

Code updates have a security aspect, and may need a separate control path to the tapping/sniffing.

There are issues of how much is collected and when; one cannot collect everything all the time on all links. This introduces the need for sampling.
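The simplest form of the sampling mentioned above can be sketched in a few lines: keep each packet independently with some probability, and scale the kept counters back up to estimate the true totals. This is a generic uniform-sampling illustration (function name and interface are assumptions), not a specific monitor's method.

```python
import random

def sample_bytes(packet_sizes, rate, seed=0):
    """Uniform 'keep with probability `rate`' packet sampling.
    The kept packets' byte counts are scaled by 1/rate, which gives
    an unbiased estimate of the true traffic volume while storing
    only a fraction of the observations."""
    rng = random.Random(seed)
    kept = [size for size in packet_sizes if rng.random() < rate]
    estimated_total = sum(kept) / rate
    return kept, estimated_total
```

The trade-off is variance: the lower the sampling rate, the noisier the estimate, so the rate has to be matched to the precision the analysis needs.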

A next step is how one gets people to participate, e.g. why does a site want this, what does a site get out of it, and how do we allay fears about security? As an example, BGP convergence is about to become a major issue for ISPs since there is some new data from Greg; as a result one may come up with better traffic engineering.

Want to draft a document outlining what is to be measured; this will lead to the next steps. Fred True will work on this.

There is considerable interest in performance experience, and in particular in breaking the performance down into components (e.g. DNS, syn/ack, application/server response, etc.). This would be done with active/passive measurements and may require instrumenting the network, the client, & the server. Then one can address how to characterize the components, where there may be bottlenecks or instability, and how they can be addressed.

What sorts of things are people interested in when addressing workload characterization? One axis might be the application. One wants to see this historically and today. Multimedia is difficult since it does not use well-known port numbers, which makes characterization hard.

Who will get access to what kind of data, who decides, and who owns what data? Can one break it down into data types, e.g. aggregated, BGP, detailed flows...? We will need a more formal definition of participants if we are to make data available to the participants.

Highlights of day 1 - Fred True

Want to continue to make active measurements of performance-experience metrics: delay, loss, jitter, bulk transfer. We have several tools to do this and there are IETF standards.

On the passive side we need the topology corpus, workload (traffic characterization), user behaviors/characterization, traffic characteristics along a multi-hop path, and protocol characteristics and behavior.

We also want to correlate active and passive measurements: do active measurements affect the passive measurements and vice versa?

We need some initial applications/milestones etc. Possibilities include:

Want to monitor end-to-end performance of the network, e.g. defining SLA metrics and how to measure them. This should include application and OS components.

Network Data Warehouse Project - Ted Johnson, ATT

Trying to develop a database language etc. for handling large volumes of data. The data consist of TCP/IP headers, NetFlow records, performance measurements, and configuration data. They are used to study IP network performance, traffic maintenance, routing simulations, and load analyses. They are also useful for ad-hoc studies, negotiating peering arrangements, and studying customer trends.

Data is collected at edge routers and transmitted to a centralized data warehouse and analyzed by scripts etc. 

The problems are that there is inadequate support for queries: many network queries are not suitable to express in standard database languages. This leads to the use of procedural tools such as Perl, which are difficult to write, use, and understand. A second problem is excessive data traffic; edge routers can generate 100s of GB/day.

The solutions are to develop advanced OLAP tools such as EMF-SQL that provide complex group definition and aggregation. A second solution is distributed query processing, which allows the data to be stored locally and moves the query to the data, not the data to the query.

The language (Egil) is a small extension to SQL to allow complex aggregation queries, e.g. count the number of concurrent flows between each source and destination IP in 5-minute intervals. An example of an Egil query is shown below:


Select ...
Group By SourceIP, DestIP, Interval=['00:00' to '23:55' by '00:05'] : X
Such That
   X.SourceIP = SourceIP
   and X.DestIP = DestIP and X.StartTime < Interval + '00:05'
   and X.EndTime >= Interval
Having
   sum(X.Bytes) > 1.5 * sum(Y.Bytes)/5.0

The status of Egil is that the initial version of the translator is completed. What database is used is a separate issue; currently it uses the Daytona database. The functions are done by the Egil layer on top of the database.
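The interval-overlap condition in the example query (StartTime < Interval + '00:05' and EndTime >= Interval) can be reproduced in ordinary Python as a sanity check of the intended semantics. The function below is a hypothetical sketch, not part of Egil: a flow is counted against every 5-minute interval it overlaps.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def concurrent_flows(flows, width=timedelta(minutes=5)):
    """Count, per (src, dst) pair, the flows overlapping each
    5-minute interval.  `flows` is a list of (src, dst, start, end)
    tuples with datetime start/end.  A flow overlaps interval t iff
    start < t + width and end >= t, matching the Egil example."""
    counts = defaultdict(int)
    for src, dst, start, end in flows:
        # first interval whose window [t, t + width) contains `start`
        t = start - timedelta(minutes=start.minute % 5,
                              seconds=start.second,
                              microseconds=start.microsecond)
        while t <= end:                    # EndTime >= Interval
            counts[(src, dst, t)] += 1     # StartTime < Interval + width holds
            t += width
    return counts
```

What makes such queries awkward in plain SQL is precisely this one-to-many mapping from each flow row to every interval it overlaps, which Egil's `Interval` group construct expresses directly.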

Multicast Inference of Network Characteristics (MINC) - Nick Duffield, ATT

http://www.research.att.com/projects/minc

The goal is to infer internal network characteristics, e.g. link loss, delay, and network topology, from end-to-end measurements of packet loss and delay. The assumption is that there is no participation by the network apart from forwarding packets, so one does not need administrative access to the network components. The idea is to infer across diverse domains crossing multiple ISPs. It complements other approaches such as ping, traceroute, and pathchar, but needs no ICMP responses from routers; it also complements network-active approaches (IPMP, mtrace), requiring no extra router action or state.

The principle is to correlate measurements on intersecting end-to-end paths to reveal characteristics of their common portion.

Ingredients: NIMI multicast measurement probes, statistical methodology to correlate measurements and infer link behavior, and web-based visualization tools (MINT). New directions are to use unicast tools and to see if the inference can be done passively.

NIMI has measurement tools zing, traceroute, treno, ... with 40 hosts worldwide (volunteers needed).

Multicast is used since it may be more scalable than unicast (N**2 paths between N fully meshed hosts) for active measurements. Also, a single packet splits into multiple paths, so there is a common delay on the first link of a path, etc. The model assumes independent link losses and delays (there is some robustness w.r.t. variation in independence). The data are traces of end-to-end loss and delay (per packet). They use maximum likelihood estimation to find the parameters that maximize the probability of obtaining the observed data.
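For the smallest case, a two-receiver tree, the maximum-likelihood inference has a well-known closed form, which makes the idea concrete. The sketch below simulates multicast probes over a shared link that branches to two receivers, then recovers all three per-link pass rates from the end-to-end receptions alone; function names and the simulation are illustrative, not MINC's code.

```python
import random

def simulate_probes(n, a, b1, b2, seed=1):
    """Multicast probes over a two-receiver tree: a shared link with
    pass probability `a`, branching to leaf links with pass
    probabilities `b1` and `b2`.  Returns per-probe flags
    (reached_receiver1, reached_receiver2)."""
    rng = random.Random(seed)
    obs = []
    for _ in range(n):
        shared = rng.random() < a
        obs.append((shared and rng.random() < b1,
                    shared and rng.random() < b2))
    return obs

def infer_pass_rates(obs):
    """Infer per-link pass rates from end-to-end receptions only.
    With P1 = a*b1, P2 = a*b2 and P12 = a*b1*b2, the MLEs are
    a = P1*P2/P12, b1 = P12/P2, b2 = P12/P1."""
    n = len(obs)
    p1 = sum(r1 for r1, _ in obs) / n        # P(receiver 1 got probe)
    p2 = sum(r2 for _, r2 in obs) / n        # P(receiver 2 got probe)
    p12 = sum(r1 and r2 for r1, r2 in obs) / n
    return p1 * p2 / p12, p12 / p2, p12 / p1   # (a, b1, b2) estimates
```

The correlation induced by the shared link is what makes this work: both receivers miss the same probe whenever the shared link drops it, and that excess joint loss pins down the shared link's rate.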

They have compared their loss measurements to those obtained using mtrace. Mtrace requires router collaboration and allows one to find the topology. They get good agreement. They used 100 ms probes, and got between 3% and 30% losses. They have simulated delay measurements (they don't have GPS at all sites to enable real measurements of one-way delay) and get good agreement, to 1 part in 1000.

With NIMI they have made ongoing multicast connectivity measurements since 4Q99, 4 times daily, and are also doing ongoing loss measurements.

New directions:

Measurement techniques for content distribution networks (CDN) - Oliver Spatscheck, ATT

AT&T has had a CDN on a trial basis since Jan 2000. Initial measurements are based on CDN logs, with a small number of clients. The initial analysis reports include the clustered client population, load on the CDN (e.g. number of HTTP requests from a single customer as a time series, unique IP addresses in 60 seconds as a time series, where the customers are), and performance of the CDN. They measure the latency to get a fixed page from various areas.

The CDN network consists of clients, the Internet, and CDN nodes. Each CDN node contains an access router, a level-4 switch, and caches. They want to measure the number of hits and the network bandwidth per cache, per switch, per ICDS node, and in total. They also want to know the latency of and hits on the cache, the L4 switch, the network, and the client. They want to get the DNS hidden load factor: each DNS resolution generates a load on the caches.

A question is where to gather the information. Measuring on the cache requires no added hardware, all successful client requests are observed, there is easy access to higher-layer info, and it captures user trends. The drawbacks are the performance hit on the cache, the measurement infrastructure being on the critical path, and slow turnaround for new measurement tools. Measuring on the switch also requires no added hardware, but there is a performance impact on the switch, higher-layer information has to be reassembled, and there are long delays in adding new features (even worse for routers). Measuring on the client makes it easy to access higher-level information and measures the performance seen by the user. However, one needs to deploy a large number of clients to get representative results, with high initial cost and high operational overhead.

So they are going to use a sniffer. It measures all traffic reaching the CDN and is not on the critical path. The cost is pretty low ($15K), and it provides analysis of higher-layer statistics in real time, flexibly, inexpensively, and non-intrusively. It is based on a Dell PowerEdge 2450 (733 MHz PII, 2 GE interfaces (full duplex, one for each direction), 1 FastE for management, 1 GB RAM, 4*9 GB RAID) and a passive optical splitter.

It needs real-time aggregation to reduce the data volume. The Dell PowerEdge 2450 supports only 360 Mbps for 1460-byte packets when running unmodified Linux (it generates too many interrupts; the limitation is probably mainly the 32-bit PCI bus, which is fixable, but then it will probably run into cache invalidation; the bottlenecks are on the receive side). They will optimize the data path in the sniffer to support high speeds, and provide easy-to-use interfaces to the high-speed data path. The algorithmic challenges are to construct useful application-layer information on the fly at Gbit rates, and to deal with re-ordered and lost packets.
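The real-time aggregation idea above can be sketched as a simple interval-bucketed counter: per-packet observations are folded into per-flow counters, and a completed interval is flushed as one summary row, so only aggregates rather than raw packets leave the sniffer. The class below is a minimal illustration under that assumption, not AT&T's implementation.

```python
from collections import defaultdict

class FlowAggregator:
    """Fold per-packet observations into per-(src, dst) counters and
    emit one summary per flow per interval, bounding the data volume
    regardless of the packet rate."""

    def __init__(self, interval_s=60):
        self.interval_s = interval_s
        self.bucket = None                           # start of current interval
        self.counts = defaultdict(lambda: [0, 0])    # (src, dst) -> [pkts, bytes]
        self.emitted = []                            # flushed (bucket, summary) rows

    def observe(self, ts, src, dst, nbytes):
        bucket = int(ts // self.interval_s) * self.interval_s
        if self.bucket is None:
            self.bucket = bucket
        elif bucket != self.bucket:                  # interval rolled over: flush
            self.emitted.append((self.bucket, dict(self.counts)))
            self.counts = defaultdict(lambda: [0, 0])
            self.bucket = bucket
        entry = self.counts[(src, dst)]
        entry[0] += 1
        entry[1] += nbytes
```

A real sniffer would add timeout-based flushing and handle packets arriving out of timestamp order, which this sketch deliberately omits.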

The initial goals are to measure: how many connections are not accepted; how many are not completed; what CDN performance users experience in latency and throughput; and to classify failures based on client cluster, switch used, and cache used. The infrastructure provided should allow the fast implementation of additional measurements.

The sniffer has to be secure. They looked at Niksun, Narus, PacketScope, and Network Associates, but felt the delays in adding new features would be too long.

IPEX data analysis server demo - Raghu Kacker <raghu.kacker@nist.gov> & Hung-Kung Liu <liu@nist.gov>

http://statserver.statsci.com/statserver/aoclientwiz/demos/nist/nistplot.htm

http://statserver.statsci.com/statserver/demos/

One can choose source, destination, packet size, start hour, end hour, start date, and end date, and get box plots or Auto-Correlation Function plots.

In the discussions, suggestions were made to add loss, distributions (PDF, CDF), and spectrum analysis. Other suggestions were to add an upload capability and to provide password access to identify the sites (they are currently sanitized and identified by number). Their server does allow access to remote data via SQL queries. This might allow NIST to use its StatServer (which is quite expensive to license) to analyze SLAC's data repository. S-Plus 2000 (from StatSci) provides seamless access to Excel.

Discussions 2

Need to make some choices of: hardware for measurements (active & passive), software (e.g. NIMI or something from HP), how to do node management, and OS. Need to come to decisions on this so we can start asking for the equipment.

Hosting site requirements, e.g. GPS

Could implement PingER functionality on the NIMI boxes. This would allow XIWT to extend its existing active measurement infrastructure.

How does one distinguish IPEX from other persistent/consistent measurement infrastructures (PingER, AMP, Surveyor, NIMI were mentioned)? One factor is that most of the others are aimed at academic and research networks while DARPA is interested in the "public" Internet, and the XIWT consortium does bring together players that do not normally work together.

We need to do something more in the active measurements, i.e. extend PingER, and decide what new active measurements to make.

Need a white paper 2 weeks before the DARPA PI meeting in Albuquerque to state what IPEX will focus on. Fred and Chuck will do this.

