Rough Notes of XIWT/IPWT Meeting at Intel 11/13/97
They hope to have 10 sites running the PingER tools soon. At the moment: Bell South (2 sites running), SBC (in process soon), Digital (2 in process soon), Houston Associates (1 soon), US West (1 site running), West Pub (running), CyberCash (to come), TC Labs (to come), HP (running), NIST (2 running), CNRI (1 soon). This comes to 15 collection hosts. We need at least 15-20 sites (a rule of thumb) to allow aggregation of non-Gaussian data; thus one needs 15-20 sites for each aggregation (e.g. the US North-west region alone would need 15-20 sites).
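Because ping samples are long-tailed rather than Gaussian, aggregation across that many sites is usually done with order statistics such as medians and percentiles rather than means. A minimal Python illustration (the site names and RTT values below are hypothetical):

```python
# Illustrative only: aggregate non-Gaussian ping samples with order
# statistics (median, 90th percentile) rather than means.
from statistics import median, quantiles

# Hypothetical round-trip times (ms) collected from several sites.
rtt_by_site = {
    "site-a": [40, 42, 41, 250, 43],   # note the long-tail outlier
    "site-b": [55, 54, 60, 57, 56],
    "site-c": [90, 95, 600, 92, 91],
}

all_samples = [rtt for samples in rtt_by_site.values() for rtt in samples]
print("mean, distorted by the tail:", sum(all_samples) / len(all_samples))
print("median:", median(all_samples))
print("90th percentile:", quantiles(all_samples, n=10)[-1])
```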
It was agreed to do full mesh pinging between collection sites. There was a discussion of the value of pinging the root name servers. The choice of sites to ping is predicated on how one wishes to use the data.
Goals of Measurement Project
- does one want to focus on response time, throughput, reachability, packet loss? The data format is recorded in a self-defining way, so increasing, say, the number of 100-byte pings/sample to improve granularity (currently 10), or reducing the number of 1000-byte pings/sample to conserve bandwidth, should not be a problem;
- setting up service level agreements;
- problem identification/recognition (within ~15 minutes if one wishes to be real-time and proactive, i.e. detect problems before the user complains; in addition, one needs to balance the network load against how soon a problem should be detected, and the measurement frequency may depend on the criticality of the remote site), prediction, and pinpointing the location/cause of problems;
- getting an extended version of the Internet Weather Report;
- what are you getting in the way of performance;
- what is the Internet industry at large getting in the way of performance;
- getting baseline measurements to establish expectations of normal behavior;
- configuration planning (when & where to request improvements in bandwidth, connectivity, peering, etc.), which requires long-term trends;
- coming up with a "SpecMark" for ISPs, though one has to be very careful about legal problems;
- establishing confidence & standards for the tools and measurements; this is needed to improve communications with ISPs, provide a common language, etc.;
- improving the performance for users/customers (internal & external);
- providing the raw data to others for mining, which might allow more insight into the meaning of the data.
There was a discussion of whether we should be measuring important application performance (e.g. Web, Internet phone, RealAudio, video). This appears to be a separate goal, which would probably need different tools (e.g. HTTP GET), and problems might be addressed to different people (e.g. address ping performance to ISPs, address Web performance to the WebMasters at the remote sites). The end point one wants to reach is to understand customer perceptions of application performance; however, one needs to start reasonably simply, so the initial focus is on network performance via the ping measurements.
Data Needed to Address Goals:
- site connectivity information, including ISP and link changes, with history;
- host information, including IP address, platform, OS (version), latitude/longitude;
- synchronization between sites;
- measurements to include international sites;
- full mesh pings between XIWT collection hosts;
- also ping some common hosts such as the root name servers, the 5 Merit route servers, and hosts at exchange points; in addition, each collection site should provide 5 sites important to it that everyone will ping, so that the data can be aggregated in ways that meet the goals;
- in order to address aggregation by groups, there will need to be the ability to select the data by groups of sites;
- it was decided to keep the same parameters for data collection as the ESnet sites use, i.e. half-hourly intervals, 10*100 byte pings, 10*1000 byte pings;
- it is important to use a consistent base ping tool that behaves identically in all (including pathological) cases; this may require root privileges.
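As a concrete sketch of what one half-hourly sample could look like with these parameters, the following Python wrapper drives the system ping. The flags shown are for Linux ping (where -s sets the ICMP payload size), the target host name is a placeholder, and a real deployment would ship one consistent ping binary as noted above:

```python
# Sketch only: run the agreed sample (10 x 100-byte and 10 x 1000-byte
# pings per target). Flags are for Linux ping; note -s is the ICMP
# payload size, so "100 byte pings" is a loose description.
import re
import subprocess

def ping_sample(host: str, count: int = 10, size: int = 100) -> dict:
    out = subprocess.run(
        ["ping", "-c", str(count), "-s", str(size), host],
        capture_output=True, text=True, timeout=120,
    ).stdout
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtt = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)", out)  # min/avg/max
    return {
        "host": host,
        "loss_pct": float(loss.group(1)) if loss else 100.0,  # assume all lost
        "rtt_min_avg_max_ms": tuple(map(float, rtt.groups())) if rtt else None,
    }

for size in (100, 1000):            # the two agreed packet sizes
    print(ping_sample("collector.example.org", count=10, size=size))
```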
Measurements Other Than Ping
DNS response is already separated from the ping performance. Does one want DNS performance and, if so, how does one get it (e.g. which name servers is one measuring the response of)? It was agreed that this should be a separate future project.
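If DNS measurement is picked up later, even a crude timing is easy to sketch. The snippet below times the system resolver, which leaves open exactly the question raised above of which name server is actually answering:

```python
# Crude sketch: time a lookup through the system resolver. This does not
# identify which name server answered (the open question above), and
# repeated lookups may be served from a local cache.
import socket
import time

def dns_response_ms(name: str) -> float:
    start = time.perf_counter()
    socket.gethostbyname(name)       # uses the host's configured resolver
    return (time.perf_counter() - start) * 1000.0

print(dns_response_ms("www.example.com"), "ms")
```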
Getting HTTP GET response times is a future goal.
Quality of service related data is a future goal.
Traceroute has 2 contexts:
- As a supplement to ping analysis, and as a manual reactive debugging tool. It was agreed that to enable this we should set up reverse traceroute servers at the major sites, also recommend them at the remote sites, and publish the URLs.
- Long term, to understand what the routes are, look at the pathologies, see route changes, etc. The analysis is quite a challenge. For example, how often should one record the traceroute (Paxson's work showed that about 1/3 of the routes change daily), and can one automate the frequency based on how stable the results are? One could measure the traceroute once a day (at a random time) and archive this information. This would give an indication of the predominant route, which might be useful to correlate against long-term performance trends: if one saw a big change in performance (when averaged over a day or a month), this might correlate with a change in the predominant route. The downside is the extra network load for measurements that may not be used or analyzed, or that may not be appropriate for the analysis needed (i.e. we need an analysis plan or a designed experiment before taking the data). Also, since it probes routers, traceroute might affect router performance and the network more than pings do. It was proposed that any individual sites interested in collecting such data and providing analysis are encouraged to do so, but it is not a group task. As a group task it was agreed to add it to the list of future goals.
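A minimal sketch of the once-a-day, random-time traceroute collection discussed above (the target list and archive layout are invented for illustration):

```python
# Sketch: one traceroute per target per day, at a random moment within
# the day, with the raw output archived. Target list is a placeholder.
import random
import subprocess
import time
from datetime import date
from pathlib import Path

TARGETS = ["collector.example.org"]          # hypothetical target list
ARCHIVE = Path("traceroute-archive")
ARCHIVE.mkdir(exist_ok=True)

while True:
    offset = random.uniform(0, 24 * 3600)    # random moment within the day
    time.sleep(offset)
    stamp = date.today().isoformat()
    for host in TARGETS:
        out = subprocess.run(["traceroute", host],
                             capture_output=True, text=True).stdout
        (ARCHIVE / f"{host}-{stamp}.txt").write_text(out)
    time.sleep(24 * 3600 - offset)           # wait out the rest of the day
```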
An Overview of NIMI (National Internet Measurement Infrastructure) - Vern Paxson, LBNL
NIMI is a collaboration between LBNL & PSC. It is an NSF-funded (for 1 year) pilot project that is also supported by DOE. It includes the development of tools, with the goal of widespread deployment of measurement platforms for diagnosing performance problems, making baseline measurements to see how traffic evolves, and assessing the performance delivered by IP clouds. The general model is to have platforms situated at the borders of clouds, measuring the traffic through the clouds with an N**2 type measurement. The original deployment for Vern's thesis included about 20 sites in Europe, the U.S. & Asia (~380 pairs).
The design goals & constraints are:
- Use active monitoring;
- Does not depend on the cooperation of the clouds;
- Support of a wide range of active measurements;
- Scale to >> 1000's of platforms (avoid success disaster, c.f. the Web);
- Platform owners have full admin/policy control;
- Solid security/authentication in from the beginning;
- Platforms require minimal administration, maximal self-configuration.
Making a measurement involves a few basic types of request:
- Schedule future participation in a measurement;
- Pick up measurement results;
- Delete measurement results;
- Cancel or suspend a schedule;
- Query the current schedule.
Security & authentication:
- Requests to a NIMI include the name of a credential and a corresponding cryptographically-secure signature;
- Authentication based on ACLs;
- Rows correspond to credentials;
- Columns correspond to actions;
- You control access to a platform by controlling who has the corresponding credentials;
- Requests are encrypted to avoid session hijacking;
- Implemented on top of RSAREF.
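The ACL model above can be pictured as a small table with credentials as rows and actions as columns. The sketch below is illustrative only: it uses a Python dict for the table and an HMAC as a stand-in for the RSAREF-based signatures NIMI actually uses, and none of the names reflect NIMI's real API:

```python
# Illustrative ACL check: rows are credentials, columns are actions.
# HMAC stands in here for NIMI's RSAREF-based signatures.
import hashlib
import hmac

ACL = {  # credential name -> set of permitted actions
    "lbnl-ops": {"schedule", "query", "pickup", "delete", "cancel"},
    "guest":    {"query"},
}
KEYS = {"lbnl-ops": b"secret-1", "guest": b"secret-2"}   # demo keys only

def authorize(credential: str, action: str, body: bytes, sig: bytes) -> bool:
    expected = hmac.new(KEYS[credential], body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig) and action in ACL.get(credential, set())

body = b"schedule poip 2400 UTC"
sig = hmac.new(KEYS["guest"], body, hashlib.sha256).digest()
print(authorize("guest", "schedule", body, sig))   # False: action not permitted
print(authorize("guest", "query", body, sig))      # True
```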
Auto-configuration goal is to keep platforms lean & mean:
- Each NIMI has one "master point of contact";
- In the minimally-configured state, the platform just has a single ACL;
- When a platform is initialized, it contacts the master and asks to be initialized;
- The master downloads the kernel of an ACL table & redirects the platform to other configuration sources;
- The ACL structure facilitates this using sub-tables (a toy bootstrap sketch appears after the prototype status below).
- Two distributions planned: cookie-cutter & general Unix
- Standard platform (<$3K):
- 200MHz Ppro
- 64 MB RAM
- 4 GB disk
- 10/100 Mbps Ethernet
- Standard OS: FreeBSD 2.2.2
- Administrative access: ssh
- Optional GPS card (not mandatory, to avoid antenna problems). They will run comparisons between GPS & NTP measurements; Vern suspects NTP will be good enough. They have paid a lot of attention to making sure they understand NTP (e.g. avoiding clock skips), and they have access to the FreeBSD developers who implement & understand NTP.
- Perl prototype running on PSC, LBNL platforms
- Includes autoconfig, ACL-based authentications, security, running measurements
- Does not yet include: management of results, querying
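A toy rendering of the auto-configuration bootstrap described above; every message name and field here is invented for illustration, since the notes do not record the actual protocol:

```python
# Toy bootstrap sketch: a freshly installed platform knows only its
# master point of contact and a single ACL entry. All details invented.
def bootstrap(master_contact):
    request = {"msg": "initialize-me", "platform": "nimi-host-42"}
    reply = master_contact(request)          # ask the master to initialize us
    acl = reply["acl_kernel"]                # kernel of the ACL table
    for source in reply["other_config_sources"]:
        acl.update(source())                 # sub-tables make merging easy
    return acl

def fake_master(request):
    return {"acl_kernel": {"master": {"*"}},
            "other_config_sources": [lambda: {"lbnl-ops": {"schedule", "query"}}]}

print(bootstrap(fake_master))
```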
Poisson Ping ("Poip")
- Provides one way measurements;
- Sources/sinks UDP packets transmitted at Poisson intervals (or uniform or periodic);
- Uses a generic "wire-time" library - an interface to libpcap;
- Myriad sanity checks on packet integrity;
- Packet headers include: version, type, length, sequence, timestamp, MD5 checksum over payload;
- Uses the Anderson-Darling A**2 test on the sending times as a self-consistency check (were they exponential? uniform? In general, due to timer granularity, they may not be exactly exponentially distributed) - see the sketch after this list;
- 2000+ lines of code.
- Near term deployment at DOE sites;
- Longer-term deployment by volunteer ISPs (e.g. WorldNet);
- Measurement orchestration, analysis frameworks (e.g. how to request data for a certain time frame);
- Archiving results: reduction, navigation, visualization issues;
- Integration with Surveyor, IPMA (Craig Labovitz, Merit) projects. The emphasis for NIMI is to provide an infrastructure on which, for example, Surveyor or other tools could be installed.
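The Poisson scheduling and self-consistency check are easy to sketch: draw exponential inter-send gaps, then test whether the achieved gaps really look exponential. Here scipy's Anderson-Darling test stands in for Poip's own A**2 implementation, and the mean gap is an assumed value:

```python
# Sketch of Poip-style scheduling: exponential inter-send gaps give a
# Poisson process; afterwards, check the achieved gaps look exponential.
import random
from scipy import stats

MEAN_GAP_S = 1.0                       # assumed mean inter-packet gap
gaps = [random.expovariate(1.0 / MEAN_GAP_S) for _ in range(1000)]

# In a real sender, clock granularity can distort the gaps, which is why
# Poip tests the *achieved* send times rather than the intended ones.
result = stats.anderson(gaps, dist="expon")
print("A^2 statistic:", result.statistic)
print("critical values:", result.critical_values)
```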
Meaty Research Issues
- Self configuration: the mapping problem, how do a set of NIMIs recognize that they have something in common (location, paths to core etc.);
- Integration with HOPS (Paul Francis), SRM;
- Going from individual results to problem diagnosis (very hard problem);
- Aggregating results into higher-level statements about regions (get infrastructure to tell you there is something wrong in say Colorado).
- Holy grail: end-user clicks on a button to diagnose a problem; this queries a distributed database; maybe causes additional measurements to be made and added to the database.
What Analysis & visualization capabilities are needed?
After some discussion it was agreed that it may be premature to define new capabilities; it might be more rewarding to see what already exists and try it out with the data XIWT is going to gather.
What does SLAC/HEP do?
- Each point is a 30-minute sample, for a selected time window;
- Each point represents one day, going back over the last 180 days;
- A 10-week moving window for weekday data;
- Each point represents one month, going back for years;
- Average Response time (for 100 & 1000 byte pings);
- Average Loss (for 100 & 1000 byte pings);
- Unreachability (host does not respond to any of 10 pings);
- Frequency of quiescent network (zero packet loss);
- Unpredictability (best / avg response, best / avg ping);
- Prime time (at SLAC);
- Weekends vs. midweek;
- Tables with sorting, links to drill down to more detail (e.g. raw data, plots), output to Excel, coloring to identify quality;
- Graphs with spline fits to aid the eye in seeing trends.
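Several of these derived quantities follow directly from the raw 10-ping samples. A hypothetical sketch (sample values and field names invented):

```python
# Sketch: derive a few of the SLAC-style quantities from raw samples.
# Each sample is the result of one 10-ping burst; values are hypothetical.
samples = [
    {"rtts_ms": [40, 41, 42, 40, 43, 41, 40, 42, 44, 40], "sent": 10},
    {"rtts_ms": [], "sent": 10},                 # unreachable sample
    {"rtts_ms": [55, 250, 60], "sent": 10},      # heavy loss
]

for s in samples:
    rtts, sent = s["rtts_ms"], s["sent"]
    loss = 1 - len(rtts) / sent
    unreachable = not rtts                       # no response to any ping
    quiescent = len(rtts) == sent                # zero packet loss
    if rtts:
        avg = sum(rtts) / len(rtts)
        unpredictability = min(rtts) / avg       # best / avg response
    else:
        avg = unpredictability = None
    print(dict(loss=loss, unreachable=unreachable, quiescent=quiescent,
               avg_ms=avg, unpredictability=unpredictability))
```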
Strong time-of-day dependencies (e.g. nights better than days) indicate queuing and that the link is in need of improvement.
Compare customer expectations for a metric and its variability against the actual measurements; this is related to something known as Cpk (the process capability index). Cindy Bickerstaff of Intel will provide information on this, what it requires, and how to calculate it.
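For reference, and pending those details, the standard process-capability calculation compares the spec limits a customer expects against the measured mean and spread. A generic Python sketch with hypothetical numbers:

```python
# Generic Cpk sketch. For a response time, often only an upper spec
# limit (USL) applies; all values here are hypothetical.
from statistics import mean, stdev

def cpk(samples, usl=None, lsl=None):
    mu, sigma = mean(samples), stdev(samples)
    candidates = []
    if usl is not None:
        candidates.append((usl - mu) / (3 * sigma))
    if lsl is not None:
        candidates.append((mu - lsl) / (3 * sigma))
    return min(candidates)             # worst-case side of the spec

rtts_ms = [120, 130, 125, 140, 135, 128, 131]   # hypothetical measurements
print(cpk(rtts_ms, usl=200))   # how comfortably we sit inside a 200 ms limit
```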
Keynote Systems

This is a two-and-a-half-year-old startup with 18 people. Its mission is to measure Internet performance and to provide raw data & analysis. The flagship product is Keynote Perspective. They basically measure the response time to GET a URL, repeating the measurement every 15 minutes, and find order-of-magnitude differences in the time to download a given Web page, i.e. a page at a customer site is downloaded from multiple (39-40) Keynote locations. Keynote locations in a given city may have multiple ISP connections. The Keynote boxes are located at Web collocation sites with national backbone connections (to eliminate problems caused by the monitoring site itself having a poor connection to the Internet) and are remotely managed. They provide a plot of y = time, x = measuring site, sorted to show which are worst over some time frame. They are seeing an increase in sites with connections to multiple ISPs. The ISPs are reselling Keynote measurement services and so are willing to collocate Keynote boxes. They also work with Keynote customers to identify the effect of changes (e.g. changing the peering). The alarms also add traceroute information. They attempt to separate the effects of LAN and server/database performance from network performance by monitoring response time both locally and remotely.

The service is used to:
- Understand the web site visitor experience;
- Track competitors web site responsiveness;
- Internet management tools;
- Generate email/pager alerts;
- Provide trend analysis (peaks & valleys);
- Isolate & document access problems;
- Understand access problems in real time.
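The core measurement, wall-clock time to fetch a URL repeated every 15 minutes, is straightforward to approximate. This sketch ignores the per-component breakdown a commercial probe would add, and the URL is a placeholder:

```python
# Rough sketch of the core measurement: wall-clock time to fetch a URL,
# repeated on a fixed interval. A commercial probe would also break the
# time into DNS, connect, first-byte, and transfer components.
import time
import urllib.request

def fetch_ms(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()                      # pull the whole page
    return (time.perf_counter() - start) * 1000.0

while True:
    print(fetch_ms("http://www.example.com/"), "ms")
    time.sleep(15 * 60)                  # repeat every 15 minutes
```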
Service Delivery Options:
- History up to 6 weeks, or a 24-hour timeline;
- Views can be user configurable, metro timeline, maps & pie charts;
- They can provide optional alerts; today there is not much help in determining the cause of a problem.
Monthly Service Subscription:
- Web Edition prices: $295/month per URL, 3 month initial commitment, alarms at $95/month
- Desktop prices: $495 per URL, $795 per month per URL coupled with Web Edition, 3 month initial commitment, capability to analyze data in detail, includes alarms.
The customers want a single, easily understandable quality metric that describes the user experience. They are thus confused when people discuss the detailed meanings of statistics, so using terms like medians and inter-quartile distances does not help understanding. Keynote today uses simple averages and standard deviations.
What are the requirements for the archive site?
According to Dave Martin of HEPNRC, probably about 25% of an FTE is consumed just running the ESnet archive. This includes helping new collection sites get started, and ensuring that the data is gathered accurately and in a timely fashion. Other costs include the hardware and the SAS license.
Is there a requirement for multiple archive sites? ESnet looked at this and decided against it; it did not seem to provide sufficient benefits to justify the extra costs. The benefits would be faster access to the Web pages from different parts of the world by mirroring the data, and load sharing; neither is compelling at the moment. It may be useful to have an additional (non-public, and not necessarily complete or current) copy of the data at a site that is developing analysis tools.
CNRI will try to get the archive site running by the new year. They hope to get a Sun/Solaris machine with a SAS license.
The members agreed to share the gathered data / information amongst themselves. They may make some of the information publicly available after suitably anonymizing it.
A small subgroup of 3 people agreed to look at instrumenting Web servers (e.g. by providing extra information in the log) to provide better passive measurements of performance. Cindy Bickerstaff will head up the group.
There was also interest in setting up a subgroup to look at what metrics are needed to set up Service Level Agreements (SLAs) with ISPs. For example, ISPs will probably not accept ping measurements to things they do not control, e.g. to a customer site host.
Are there any controlled experiments that can be made by perturbing the system and looking at the effect?
Goals of next meeting
Have collection sites running. Share experiences. Discuss how to analyze and visualize the data. Intel is interested in analyzing the data with some of its tools, which it is working with lawyers to be able to make public.
Have archive site running.
Time of Next Meeting
December is too soon. Late January was proposed. There is an XIWT meeting in Austin in early February (3rd and 4th) that some members of this group (XIWT/IPWT) will attend. Another possibility would be San Diego; Tracie Monk offered to host a meeting in San Diego as long as somebody sponsors it to cover costs such as refreshments. It was agreed to hold the meeting in Austin on the 2nd of February.
There should be a conference call the week of January 14th at 1pm EST. Try to put presentation materials on the Web before the conference call.