Authors: Les Cottrell. Created: March 26, 2000
|Traveler||Roger L. A. Cottrell, Assistant Director SLAC Computing Services, SLAC, POB 4349, Stanford University, California 94309|
|Dates of Trip||March 23 - April 4, 2000|
|Purpose of Visit||To attend the 47th Internet Engineering Task Force meeting in Adelaide, Australia, where I gave a talk on IPv6 monitoring, and to attend the Passive and Active Measurements 2000 workshop in Hamilton, New Zealand, where I presented a talk on Internet Performance Monitoring for the HENP Community.|
Page Contents for IETF Meeting
|Introduction||IP Performance Measurements||SNMPv3 WG||IAB Wireless Workshop BOF|
|Voice Profile for Internet Mail||Security Issues in Network Event Logging||Wireless Access Protocol||ICMP Traceback|
|Telephone Number Mapping||IPv6||End-to-end IP cellular connectivity|
Each day typically consisted of a 2.5 hour morning session starting at 9:00am, two 2 hour afternoon sessions and a 2.5 hour evening session ending around 10:00pm. Each session slot consisted of about 7 parallel areas covering: applications, general interest, Internet, operations and management, routing, security, transport and user services. Typical sessions were devoted to working groups developing a protocol to meet emerging Internet requirements. Much of the work is done between meetings via email lists, so the typical session would cover progress so far, proposed next steps, and particular issues. Working groups of particular interest to me included: Voice over IP, including how negotiation is done for setting up and controlling calls (including auto-redial, roaming/mobile users) with the Session Initiation Protocol (SIP) & H.323 protocols; Telephone number mapping; IP Performance Metrics; IMAP extensions; Differentiated Services; IPng; SNMPv3, including configuration management; and ICMP traceback.
The big topics (in terms of sessions) were Internet telephony (10 sessions), security (10 sessions), wireless and roaming (5 sessions), IPv6 (4 sessions).
In several places in this document I refer to IETF drafts (draft-ietf-... or draft-iab-...). These can be found by going to http://www.ietf.cnri.reston.va.us/ID.html
VPIM is now at version 2. It was approved as a Proposed Standard in Sep-98. They need some interoperability testing of the various implementations. They are now working on parts of version 3. There are related ITU-T standards, E.164 and G.726. There was a long discussion on how to deal with non-implemented features (such as vCard, an electronic business-card format). The resolution was that such features can be ignored, but the application must not get itself into problems. Typical phones are represented as email@example.com (e.g. this might be how to send a FAX, or Telalert to a gateway server). There are also proposals for a VPIM directory including an LDAP directory. See http://www.ema.org/vpim/conformance/
Inter-voice mail system messaging is different from Internet mail. Unified messaging servers are simply proxies for Internet mail. Issues are privacy and sensitivity: people can use the client of their choice, defeating privacy (e.g. the sender did not want the voice mail to be forwarded or saved), and the sender or subscriber may not care about un-renderable parts. Delivery semantics differ: for Internet mail the message arrives completely or not at all. There are issues about partial non-delivery notification (e.g. the client cannot render some parts). Another issue is when to turn off the stutter dial tone (i.e. the signal that the message has been read/acknowledged). We seem to be moving to email with voice mail attachments, rather than voice mail with other attachments.
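The direction described above, email carrying a voice mail attachment, can be sketched with a standard MIME message using Python's stdlib email package. This is a generic illustration only (the address and the audio payload are placeholders, and this is not the actual VPIM v2 profile):

```python
from email.message import EmailMessage

# Generic sketch of "email with a voice mail attachment".
# Placeholder recipient and dummy audio bytes, not a real VPIM message.
msg = EmailMessage()
msg["To"] = "user@example.com"
msg["Subject"] = "Voice message"
msg.set_content("A voice message is attached.")
msg.add_attachment(b"\x00" * 160,          # dummy payload standing in for audio
                   maintype="audio", subtype="basic",
                   filename="message.au")

print(msg.get_content_type())              # multipart/mixed
```

Once the attachment is added the message becomes an ordinary multipart/mixed email, which is exactly why any mail client can carry it, and also why the sender loses control over forwarding and saving.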
Internet Voice Mail (was VPIM 3), i.e. voice mail behaves like email (cf. the less practical original design in which email behaves like voice mail). The IVM has one codec and is optionally backwards compatible with VPIM v2. There are new features for all email: primary critical content indication and partial NDN (Non Delivery Notification). The goals are interoperability, conformance to existing standards (e.g. FAX), backward compatibility (this appears to be becoming less important), and robustness. Some concerns were raised about a Philips patent (4932061) on GSM 06.10 encoding which does not expire soon.
How does one integrate VM systems into the Internet Mail infrastructure, given that VM systems don't have much of the capability of Internet mail? The question is how to exchange messages with devices of lesser capability, when there is a massive capital investment in VM, or in email systems (such as AOL) which have very limited capability. One has to notify the sender that some parts of the message are undeliverable (or unrenderable). This is done by Partial Non Delivery Notification (PNDN). This is difficult since delivery may be to the MTA (e.g. sendmail) but the MUA (e.g. Netscape Communicator) may not be able to play some enclosure, and the MTA & MUA may not be integrated. So more work is needed on how to interpret reports (e.g. PowerPoint not supported versus slides not deliverable) and how to identify parts of the mail. Maybe we need to step back and decide what is meant when we send a message with something that can't be read, or can only be partially read; i.e. at sending time one needs a better idea of what the receiver(s) can accept. We may need message profiles, which people (receivers) can subscribe to, and which are used by the sender to decide what/how to send. But this needs content negotiation.
See draft-ietf-diffserv-tunnels-00.txt, Diffserv & Tunnels; it is an 8 page draft at the moment. When diffserv packets are sent through a tunnel there are 2 headers, and intermediate nodes only see the outer header. It is possible that the diffserv information needs to be carried through the tunnel. There are six places where traffic conditioning can be applied: before encapsulation; after encapsulation, on the inner or outer header; at the end of the tunnel, on the inner or outer header (it does not make much sense to apply it to the outer since decapsulation will remove it); and after decapsulation at the end of the tunnel. Issues include: a configured/provisioned VPN tunnel makes the ingress and egress nodes "virtually contiguous"; tunnels and updates will need to be added to related RFCs, which will take a while and involve other WGs; and there will be input to an eventual revision of RFC 2475. A desire was expressed to coordinate with other tunneling protocols (e.g. MPLS) or even generalize it so it covers all tunneling protocols. The sense was to publish this as an informational document (no requirements).
There are 2 documents, one for the model, the other for the MIB. There needs to be closer alignment of the Model document with the DiffServ MIB and PIB: e.g. one talks about a counter where the other talks about a monitor, there are different interpretations of the token bucket, and there is a question of whether the discussion of MPLS should be reduced.
Rules + Mechanisms => Services (Dave Clark). They are defining the BA (Behavior Aggregate) and making it clear that a BA gives a way to specify the set of rules being used & the particular behavior that will result for packets. It is not required that BAs be used in a DiffServ network, i.e. they are not standards track. They may define reference BAs (see section 7.0 of the draft). A BA might be "best effort", although this may be a term that is overused and a misnomer; maybe "economy" would be better.
The authors are Van Jacobson, Kathy Nichols & Poduri, all from Cisco. The objective is to carry circuit-oriented telephony traffic over the Internet, i.e. it is a circuit replacement. There has to be more bandwidth in the cloud than in the edge circuits. There is a jitter window, which is the time between the start and end of a packet in a stream; this depends on the clock rates at the ingress and egress and the MSS. Packets can be reordered in the cloud due to queuing. With the EF PHB, and same sized packets and same rates for each customer, each customer fits in its jitter window, and each customer stream can be jittered by any other customer at most once, independent of topology. With different rates, one customer can be moved outside its jitter window. There are 3 ways to deal with this.
Typical codecs deliver a packet every 20msec. 20 ms at 100Mbps is 250 kbytes (~150 MTUs, ~1000 G.711 calls or ~4000 G.729). Worst case jitter from other traffic through a diameter 5 enterprise is 5 MTU times or 0.6ms, leaving 19ms (optimistic) to 10 ms (ultra conservative) for EF. If the ingress can police both packet and byte rate, one can admit ~700 G.711 or ~3000 G.729 calls. Otherwise one can admit ~100 calls.
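The arithmetic above can be checked directly. This sketch just recomputes the window size and worst-case jitter under the stated assumptions (a 100 Mbps link, 20 ms codec framing, and an assumed 1500-byte MTU); the talk's "~150 MTUs" rounds the same figure down:

```python
LINK_BPS = 100_000_000        # 100 Mbps link
FRAME_S = 0.020               # codec packet interval, 20 ms
MTU = 1500                    # bytes, assumed

# Bytes that can be transmitted in one codec frame time.
window_bytes = LINK_BPS * FRAME_S / 8
mtus_per_window = window_bytes / MTU

# Worst-case jitter crossing a diameter-5 enterprise:
# 5 MTU serialization times, as in the talk.
jitter_s = 5 * MTU * 8 / LINK_BPS

print(window_bytes)            # 250000.0 -> the 250 kbytes quoted above
print(round(mtus_per_window))  # 167 (the talk rounds to ~150)
print(jitter_s)                # 0.0006 s = 0.6 ms
```

The per-call counts (~1000 G.711 etc.) additionally depend on the assumed per-packet header overhead, which the talk does not spell out, so they are not recomputed here.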
The problem is to define a DNS based architecture and protocols for mapping a phone number to a set of attributes (e.g. URL) which can be used to contact a resource associated with that number. Operations and provisioning are out of scope.
An example: the input is a cell-phone number and what is wanted are the URLs associated with it. Since it is a cell phone, associated with it are a phone, a person who has email (e.g. 6509262523@DOMAIN.TLD), a web page, and a FAX. They want to use the DNS as it exists, but need to define how a phone number is converted to a URL.
There are ongoing discussions between the IETF and the ITU. See http://www.itu.int/ITU-T/ip-telecoms/presentations/ipw-4.ppt or more generally http://www.itu.int/ITU-T/ip-telecoms/ip-telecoms.htm
The draft-ietf-enum-rqmts document will be closed in 2 weeks' time.
There is a proposal on how to map an E.164 number into a domain name: e.g. take +1-650-926-2523 and create the domain name 3.2.5.2.6.2.9.0.5.6.1.e164.int (i.e. the number is reversed). It is proposed to use traditional DNS services to get a list of URLs from NAPTR records and the service access point from SRV records. The consensus is that this scheme is the right way to do things since it allows full delegation for the distributed DNS (i.e. the client tries a top level, e.g. e164.int, which then delegates to the .1. domain (US country code), which then delegates further, etc.). There was a question on whether the lookups can be done fast enough for the phone system. The answer was that there have been such tests and the DNS lookup is fast enough to meet the dialing requirements. Another question was how doing a lookup for every digit (i.e. the worst case, with no caching) for every number dialed would scale. There have been studies that show that caching is an enormous help.
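The reversal rule described above is mechanical enough to sketch as a small helper; the function name and the e164.int zone default are just illustrative, not part of the proposal:

```python
def e164_to_enum(number: str, zone: str = "e164.int") -> str:
    """Map an E.164 phone number to a domain name by reversing its
    digits and dot-separating them, as in the ENUM proposal."""
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + zone

print(e164_to_enum("+1-650-926-2523"))
# 3.2.5.2.6.2.9.0.5.6.1.e164.int
```

Because each dot-separated label is a delegation point, the reversed order puts the country code at the top of the tree, which is what makes the distributed delegation work.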
This proposal only provides a listing service, it has for example nothing to do with gateway discovery or how the call is routed, and for example it does not cover how emergency numbers (e.g. 911) are processed. May need to add information to the document to indicate that performance, administration issues have been thought about. This may be an informational draft.
Additional requirements include:
They tested out a 3 layer administrative model.
They separate the levels into clearly separate and independent services. They used the PTR record to designate the delegational authority in the top-layer query. The second layer handles info requests specific to a desired service (SRV, MX, NAPTR). The third layer is service specific and out of scope for ENUM (LDAP, RESCAP).
Web pages: http://www.advanced.org/IPPM/
Chairs: Matt Zekauskas <firstname.lastname@example.org> and Will Leland <email@example.com>
Agenda: Loss Patterns (Rayadurgam Ravikanth), IPDV (Phil Chimento), Periodic Streams, Bulk Transfer, Working group future.
See One-way Loss Pattern Sample Metrics
Only minor changes since the last meeting in Oslo. It will be re-issued this April. They are soliciting feedback on the mailing list. The metric is directly derived from the loss; it should have been part of the loss RFC but it was too late. They have not implemented the metric. For voice it works well to predict user perception. They try to capture whether the losses are bursty, and also look for loss bursts that are closely spaced. We need experience in using it; it could be put out as experimental or informational to try and get people to use it. It would be useful to give an idea/guidelines of how to use these metrics, e.g. frequency of measurement.
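The burstiness idea above can be sketched with a few lines: given a per-packet loss sequence, collect the lengths of consecutive-loss runs. This is only a rough stand-in for the draft's loss-period/loss-distance definitions, which differ in detail:

```python
def loss_bursts(losses):
    """Given a per-packet loss sequence (1 = lost, 0 = arrived),
    return the lengths of consecutive-loss runs. A crude sketch of
    the "loss period" notion in the loss-pattern draft."""
    bursts, run = [], 0
    for lost in losses:
        if lost:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:                      # flush a run that reaches the end
        bursts.append(run)
    return bursts

print(loss_bursts([0, 1, 1, 0, 0, 1, 0, 1, 1, 1]))  # [2, 1, 3]
```

Two streams with the same overall loss rate can have very different burst-length distributions, which is exactly the distinction (important for voice quality) that the plain loss metric misses.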
See Instantaneous Packet Delay Variation Metric for IPPM
There were some rewrites of clock-related errors. There is also a treatment of undefined delays; they also removed the bi-directional variation in unsynchronized clocks on round trip IPDV. The ITU has a draft, I.381, looking at jitter in bounded intervals (e.g. 5 minutes). They looked at one service provider (SURFnet, an NREN) and 2 stub nets (RIPE & University of Twente); Surveyor measurements don't cross ASs. Measurements average 1/sec. Averages are close to 0 ± 20 usec. They assume infinite-delay packets are not interesting for IPDV, i.e. they use only arrived packets: pairs with one or both delays infinite are skipped. They claim this provides a conditional distribution and conditional expectation (given that both packets of the pair arrive). One may want a constant bit rate. Questions are: what is IPDV telling us about the network, does IPDV have any predictive power, and how is IPDV correlated with delay? Next steps are to look at the dynamic behavior of both delay and IPDV, which means doing a time series analysis.
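The skip-lost-pairs rule above is easy to sketch: IPDV is the difference of one-way delays of consecutive packets, and any pair in which either packet was lost is dropped. A minimal sketch (delays in microseconds, None marking a lost packet, both my conventions rather than the draft's):

```python
def ipdv(delays):
    """Instantaneous packet delay variation: D(i+1) - D(i) over
    consecutive packets. Pairs where either packet was lost
    (delay is None) are skipped, giving the conditional
    distribution described in the draft."""
    out = []
    for a, b in zip(delays, delays[1:]):
        if a is not None and b is not None:
            out.append(b - a)
    return out

# One-way delays in microseconds; the third packet was lost.
print(ipdv([100, 102, None, 101, 101]))  # [2, 0]
```

Note that IPDV of arrived pairs needs no clock synchronization between sender and receiver, since a constant clock offset cancels in the difference, which is part of its appeal over raw one-way delay.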
See Network performance measurement for periodic streams
Network Performance Measurement for Periodic Streams, see draft-ietf-ippm-npmps-00.txt. This may address applications such as streaming audio, video, VoIP etc. which have steady streams of data. This is in contrast to a Poisson distribution of sending times. A key difference is that a singleton may only provide some parameters of interest; a sample would provide more (out of sequence, duplicate, spurious, etc.). The first draft went out last week; there is a need to discuss the difference, or progression, from a "singleton" to a "sample". They will use Poisson inter-arrival times between packet streams, since multiple packet streams may be necessary (samples of samples). This idea of streams will probably apply to all the metrics.
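The contrast above between periodic and Poisson send schedules can be sketched directly; the function names and parameters are mine, not the draft's:

```python
import random

def periodic_schedule(n, interval):
    """Periodic stream: fixed spacing, like a codec emitting a
    packet every 20 ms (interval in whatever unit you like)."""
    return [i * interval for i in range(n)]

def poisson_schedule(n, rate, seed=1):
    """Poisson stream (the RFC 2330 style of sampling): exponential
    inter-send times with the given mean rate. Seeded for
    reproducibility in this sketch."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        times.append(t)
        t += rng.expovariate(rate)
    return times

print(periodic_schedule(5, 20))     # evenly spaced send times
print(poisson_schedule(5, 0.05))    # irregular, memoryless send times
```

The point of the draft is that a periodic schedule matches what real streaming applications do, while Poisson sampling (placing the start of each periodic burst at Poisson times, "samples of samples") avoids synchronizing with periodic network behavior.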
See the current draft on the table: Empirical Bulk Transfer Capacity (the framework document; I think this one is ready to go, but personally I would like to see at least one metric).
For background only, a version of the Treno BTC document.
**THIS HAS EXPIRED, AND IS NOT CURRENTLY A WORKING GROUP DOCUMENT.**
BTC was one of the driving forces behind IPPM, yet there is still no metric. A BTC framework along with metrics has been proposed. The Treno metric draft expired; Mathis ran into problems with the details, in particular to do with out of order packets. Mark Allman has produced a program ("cap") for TCP studies that could be used as a metric. He wants to show that BTC metrics relate to actual implementations; the plan is to have a draft by the end of summer on how cap relates. This has been built into the newly revived NIMI (see http://www.ncne.nlanr.net/nimi/). Does industry/do customers need such a metric (the answer appears to be a definite yes)? Is it ready to be standardized? Is there anyone with an alternative they would advocate?
The IPPM WG exists to define unambiguous, useful metrics. The key uses are SLAs, net operations, and net & traffic engineering. The focus is on the transport level, not on specific applications; a "basis set". Work to date: framework RFC 2330, connectivity RFC 2678, one-way delay RFC 2679, one-way loss RFC 2680, round trip delay RFC 2681, IPDV draft ready for last call, loss patterns draft for experimental, periodic samples draft, bulk transfer capacity draft framework ready, Treno (2 different drafts, both now expired). Is the set complete, or is it missing a fundamental area? For the areas covered, are these the right metrics? Do we keep waiting for bulk transfer capacity (good chance, but not certainty, of something this year)?
We could produce a gazillion small specific metrics; Leland thinks NOT. Alternatively we could say it is complete and go to sleep awaiting metric standards advancement: the goal would be everything finished by the next IETF, waking up the group for new metric proposals, and waiting for the BTC. To continue, one needs people excited enough to put in the time to develop the new metrics.
Is there interest in comparing implementations, in validating the measurements, and in getting an idea of how much variation one might expect from measurement to measurement? Also, is there interest in producing user reports such as weather maps?
Syslog is a very useful mechanism for reporting events, but until now it has not been documented. It also has undocumented security problems such as unauthenticated messages. The purpose of the proposed working group is to document the existing syslog protocol, and to explore developing a standard to address security. Verifiable delivery, message integrity and authentication can also be explored. The proposed timelines are: in May 2000, put out an Internet draft on the observed behavior of the syslog protocol, with the idea of getting this on a standards track; in July 2000, put out an Internet draft on authenticated syslog, also for the standards track; in August 2000, put out an Internet draft for an authenticated syslog with verifiable delivery and message integrity; and in December 2000, revise the drafts as necessary and advance.
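The "observed behavior" the group wants to write down is roughly the classic BSD convention: a priority value (facility*8 + severity) in angle brackets, a timestamp, a hostname and a tag, sent as a single unauthenticated UDP datagram to port 514. A sketch under those assumptions (details vary between implementations, which is precisely why a document is needed):

```python
import time

def bsd_syslog_message(facility, severity, host, tag, text):
    """Format a message the way classic BSD syslog daemons are
    commonly observed to send it. PRI = facility*8 + severity;
    e.g. facility 16 (local0), severity 6 (info) -> <134>."""
    pri = facility * 8 + severity
    timestamp = time.strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {host} {tag}: {text}"

msg = bsd_syslog_message(16, 6, "host1", "myapp", "disk almost full")
print(msg)
# Transport is one plain UDP datagram to port 514, e.g.:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM) \
#         .sendto(msg.encode(), ("loghost", 514))
```

Nothing in this format identifies or authenticates the sender, which is the security gap the proposed July/August drafts are meant to address.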
"Realm Specific IP" (RSIP) is used to characterize the functionality of a realm-aware host in a private realm, which assumes a realm-specific IP address to communicate with hosts in the private or external realm. This definition comes from RFC 2663.
There is a lot of excitement in this area following a recent big IPv6 meeting
in Telluride, Colorado, and Microsoft's announcement that Windows 2K will be
shipped with IPv6 in 2 years. I discussed this with Tony Hain (ex ESnet, now
with Microsoft as the QoS Program Manager) and his view is that IPv6 will be the
default (i.e. no need to select) and this will dramatically enable the
deployment of IPv6. Microsoft are working with applications vendors to port
their applications to use IPv6. The view is that over the next few years, conventional computers on the Internet will
be joined by a myriad of new devices, including palmtop personal
data assistants (PDAs), hybrid mobile phones with data processing capabilities, smart set-top boxes with integrated Web browsers, and embedded network components in equipment ranging from office copy machines to kitchen appliances such as refrigerators, ovens and TVs. Many of these devices will have web servers, e.g. to allow a maintenance technician to remotely check maintenance/repair records, performance etc.; others (e.g. cellphones) will have web browsers to allow access to information. Thus there will be hundreds of IP addresses in each home. Many ISPs charge by the IPv4 address today, and may not wish to lose this source of revenue even when addresses are virtually unlimited, so they may be reluctant to move to IPv6. There are concerns over the alternative solutions being used to provide increased address space, e.g. network address translators (NAT), which preserve IPv4 address space by intercepting traffic and converting private intra-enterprise addresses into one or a few globally unique Internet addresses; these have problems with deployment of end-to-end security, scalability and complexity. Further, IPv6's autoconfiguration features will make it feasible for large numbers of devices to attach dynamically to the network, without incurring unsupportable administration costs for an ever-increasing number of adds, moves, and changes. The Internet Architecture Board (IAB) has put out a document on the business case for IPv6, see draft-ietf-iab-case-for-ipv6-00.txt
There are drafts for IPv6 over most network layers such as token ring, FDDI, Ethernet, ARCnet etc. They are putting together a proposal on how to pick source addresses from multiple domain name results. Another active area is how to put multicast over IPv6. There were several presentations on how to assure access authentication, in particular taking account of roaming hosts (e.g. cell phones), and how/whether they integrate with Radius, Kerberos, various key distribution systems, DHCP etc.
There is a new draft of the pre-qualifications for address prefix allocation (6PAPA), see draft-ietf-ngtrans-6bone-6papa-01.txt. No steering group has been formed.
There is a transition document, draft-ietf-ngtrans-introduction-to-ipv6-transition-03, by Alain Durand. An interim meeting has been proposed in the Bay Area for Spring 2000 (May or June). This will walk through existing proposals for security issues, review recently submitted proposals, explore new directions, and plan how to get the industry to evaluate our mechanisms & scenarios.
The 6BONE registry status is being kept by David Kessens. For his talk see http://www.kessens.com/~kessens/presentations/. For queries see http://whois.6bone.net/ or use the regular whois program. The number of countries registered is 44 (up from 42 on 1999-09-20); the last 2 countries added were MY & ID.
At the invitation of Bob Fink, the Chair of the Next Generation Transition Working Group, I made a presentation of work mainly done by Warren on "Preliminary PingER Monitoring from the 6BONE/6REN". There were about 180 people in the audience. There were questions on comparing the PingER monitoring with other IETF monitoring efforts such as Surveyor, and on the effect of routers not treating pings/ICMP the same as other packets.
There is a 6TAP router at the STAR-TAP. There is also a machine to accept tunnel peering with the 6TAP, and they have also set up a stats server which will run PingER in the next couple of weeks. All the machines are FreeBSD. They will be setting up an IPv6 route server and registry using 2 donated Sun servers with IPv6 ATM interfaces. They are looking at the Merit daemon and the IPv6 registry from David Kessens. Next they want to provide 6to4 service, and a PingER monitor. See http://www.6tap.net for establishing BGP peering sessions and a looking glass for routing information. The 6TAP participants include APAN (Japan/Korea), ESnet, NTT-ECL, CA*net ...
TLA == Top Level Aggregation (a 13 bit field in the 128 bit IPv6 address, used to keep the default-free routing table in top level routers in the Internet within the limits, with a reasonable margin, of the current routing technology; see http://www.cis.ohio-state.edu/htbin/rfc/rfc2450.html). pTLA stands for pseudo-TLA, a concept introduced by the 6BONE. The 6BONE work wanted to conserve TLAs and so allocated only one TLA (all bits set except the rightmost of the 13 bits) and then assigned each "ISP" in the 6BONE a number from the next 8 bits; these 8 bits are referred to as the pTLA. The issue of how the 128 bit IPv6 address is broken up in detail is still under intense discussion between the address allocation authorities, the IETF and the ISPs.
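The bit layout described above can be checked with a little arithmetic: the aggregatable global unicast format puts a 3-bit format prefix (001) ahead of the 13-bit TLA, and "all bits set except the rightmost" gives the familiar 6BONE 3FFE::/16 prefix. The `ptla_prefix` helper is mine, just to show where the 8 pTLA bits sit:

```python
FP = 0b001                      # format prefix for aggregatable global unicast
TLA_6BONE = 0b1111111111110     # "all 13 bits set except the rightmost"

# Top 16 bits of the address: 3-bit FP followed by the 13-bit TLA.
top16 = (FP << 13) | TLA_6BONE
print(hex(top16))               # 0x3ffe -> the 6BONE's 3FFE::/16

def ptla_prefix(ptla):
    """Top 24 bits of a 6BONE pTLA prefix: the pTLA occupies the
    8 bits after the TLA (illustrative helper, not official API)."""
    assert 0 <= ptla < 256
    return (top16 << 8) | ptla

print(hex(ptla_prefix(0x42)))   # 0x3ffe42 -> a 3FFE:4200::/24 style prefix
```

This is why every 6BONE address starts with 3FFE, and why the 6BONE could hand out at most 256 pTLAs from its single TLA.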
The 6BONE has grown from 150 sites in Sep 97 to > 500 today, and the number of pTLAs has gone from 36 to 67. The number of prefixes is ~100, but the number of pTLAs being exchanged is about 50. The first TLA was assigned to ESnet in August 1999. There are 14 sub-TLAs. pTLA route unavailability for all pTLAs is about 20%, but closer to 5-10% for the active pTLAs. This is not considered a good result; it should be close to zero on a production network. 80% of pTLAs are available in the 6BONE cloud, so the problem has to do more with a few pTLAs. He also looks at the stability (i.e. the number of routes seen per day for a pTLA versus the total number of route changes seen per day) of the most stable and least stable pTLAs. For most pTLAs the BGP stability is better than 5%.
He concludes that the BGP4+ routing has become highly stable both with respect to stability and reliability, i.e. a good level of maturity has been reached by the current IPv6 technology. A subset of the results is regularly updated at: http://carmen.cselt.it/ipv6/bgp/graphs/index.html
They are starting experimental service with a few root DNS servers, and a separate box for IPv6.
There was only one 2 hour session on SNMPv3. There is now a proposed standard (RFC 2576) for coexistence between v1, v2 & v3 of the Internet management framework document.
There is an SNMPv3 Q&A document. It is designed to answer what does SNMPv3 do, what are the costs and benefits, data for resource planning, how does it benefit the customer, how can one sell it, why can't one use IPSec security, what is the code size, how difficult is it to deploy, manage, learn about, is it interoperable with SNMPv1 & SNMPv2, RADIUS, COPS, 3DES, Kerberos, Diffie Hellman ..., what are the recommended practices. The chair is looking for a few good people to put together such a document.
There are some new/updated implementations, see http://www.ibr.cs.tu-bs.de/projects/snmpv3; RedHat Linux 6.1 & FreeBSD 3.3 are shipping with UCD-SNMP 4.0x. Some organizations are requiring/buying SNMPv3. A major need is to promote SNMPv3.
There was a consensus to work on a bulk transfer mechanism and a compression mechanism.
The goal is to provide standards and uniformity to wireless devices such as phones, PDAs etc. worldwide. It has to work across many link layers such as GSM, CDMA and ANSI-136, and for each technology there are multiple sub-technologies. They provide IP/UDP with a transaction protocol on top which provides the semantics and mechanisms based on HTTP 1.1. There is an XML-compliant markup language (called WML) which allows one to describe small screens rendered by web servers in the device. These can be used for data entry (e.g. enter a phone number and then call it). The WAP Forum has 200+ members. They are looking to conform with the IETF standards in the relevant areas. For more information see www.wapforum.org. Several concerns were raised by IETF members about the lack of coordination of WAP with other Internet standards.
i-mode is a wireless ISP and portal service from NTT DoCoMo in Japan. It is very lightweight, with a subset of HTML 3.0. It started operation in Feb 1999 and is rapidly growing: to 5M users in 1 year, currently growing at 1M users/month. There are 28M cellphone users in Japan, and only 17M Internet users of which 3M are personal users.
The latest versions of the phones have color screens. They offer email, transactional services (banking, airline reservations), database services (e.g. phone directory, restaurants ...), entertainment (games, karaoke), and information (e.g. sports). The objective is to provide personal Internet web and email service on the digital phone network. It has a packet mode to enable charging only on the amount of data transferred, and mail with push delivery to make the phone more attractive. The web interface is a subset of HTML to lower the entry barrier for content providers by enabling them to use their existing tools and content. It also uses IP technology where possible, such as HTTP, SMTP, TCP/IP.
Web pages are 5kbytes (max), 2kbytes recommended; GIFs 94x72 pixels; mail 500 bytes; bandwidth 9600bps. The next steps are to provide Java in Fall 2000, which will enable game capabilities and security functions, and in Spring 2001 (3rd generation) to look at music & video downloads (e.g. movie promotions), interactive TV ... It will use wideband CDMA, which will allow up to 384kbps. They will converge to IETF Internet standards, and welcome an open discussion with the IETF. They are looking at using IPv6.
End-to-end is important (see RFC 2775). One needs to make sure information is not within a walled garden, e.g. email both fixed and mobile. End-to-end over IP cellular is already here. It allows more efficient use of the spectrum. There are scaling problems in deploying IP to 500 million subscribers (e.g. the need for IPv6, DNS issues, mobility issues, ease of use etc.), questions of how TCP performs in wireless environments (wireless LAN, cellular, satellite), and efficient spectrum utilization (e.g. header compression over lossy links). In the IETF more attention needs to be given to how proposed standards play out with wireless. An interesting comment was made that the established companies (e.g. Ericsson, Nokia etc.) are aiming at business users, who don't care about promoting standards and IP, whereas the Japanese with i-mode are aiming at the low end and are very effectively promoting wireless, IP and the standards.
This was a very packed session, probably partially a result of the previous evening's plenary session which emphasized wireless and illustrated many exciting future directions. The IAB holds a workshop once a year aimed at a topic that is of great interest to the IETF. Workshops can make recommendations but not make decisions, they are invitational with only about 30 attendees, of which about half come from the IAB and the others are invitees.
This Wireless workshop was held at the end of February 2000. The goal was to ensure that the Internet protocols are suitable for the wireless environment. The number of wireless devices could exceed that of wired devices (think of mobile phones vs. PCs), and that could be the driving factor for the exponential growth of the Internet. Another goal was to look at the WAP protocol and see what gaps in the IP suite it uncovered. A third goal was to provide outreach and education between the wireless and IP communities.
They had presentations on Bluetooth, Firewire, WAP, transport & QoS over wireless, WWW over wireless and small screen devices, and compression & bit error requirements. There were breakout sessions on: what the IETF can learn from wireless organizations, the cost of staying with IPv4 for wireless, comparing IPv4 vs IPv6 mobile IP solutions, and what one needs to do to make TCP work well over wireless. They expect to publish a report in late April. Notes of the meeting can be found at: http://www.iprg.nokia.com/~hinden/IAB-workshop
We appear to be about to embark on the 3rd generation of wireless PCSs (Personal Communications Systems). 3G will have faster access, be IP based, and have seamless wireless AND wired mobility. There is something called the 3G.IP forum, formed in May 99, that will support IP-centric networking. 10 operators are involved: AT&T, BT, SBC, Bell South, Japan Telecom, Telenor, T-Mobil, Telia ... plus 7 suppliers: Ericsson, Nokia, Lucent, Nortel, Motorola, ... This effort is just beginning, in particular in building consensus. Call control is a critical area (H.323/SIP); it is also important to integrate with the earlier protocols and with wired (e.g. cable, DSL etc.) networks.
There has been a lot of work in this area including RFCs 2488 and 2760, plus work on end-to-end performance of slow links, implications of links with errors (including explicit error detection), TCP performance with network asymmetry, performance enhancing proxies, advice for Internet subnetwork designers, an extension to SACK option in TCP (which should help for big RTT and high loss).
Three big issues are: AAA (Authentication, Authorization, Accounting), which adds a big piece badly needed by some; multi-level mobility management; and mobile IPv6, which should have a big part to play. There is a way to do AAA over mobile IPv4 (see http://www.iprg.nokia.com/~hinden/IAB-workshop/talks/WLIP99.ppt). IPv6 has enough addresses, plus security and address autoconfiguration. Some people (e.g. Steve Deering) feel the AAA architecture is misguided for IPv6, since it requires a whole new infrastructure of gatekeepers and brokers.
IS-95 CDMA requires no modifications to the cell sites. GlobalStar is similar to IS-95. IS-95 is semi-connection oriented: hardware is allocated to the call but air resources are dynamically shared. Voice delay considerations limit the frame size; typical loss rates of 1-2% are acceptable for voice but NOT for data. RLP is a protocol optimized for small voice frames. It allows multiple frames (20 bytes each) to be put in IP packets. There is a new system, High Data Rate (HDR), under development, designed from the ground up to support wireless data on a 1.2288 MHz spread bandwidth. The forward link has a single stream of 128 byte frames (a bit like ATM); the reverse link has fixed-time 53ms frames with data rates of 4.8kbps - 307kbps.
He looks for things in IETF areas that are inter-related - there are currently 100+ IETF working groups. There are strong reasons for many of these areas to be interested in wireless. See http://home.earthlink.com/~seratonin/HotTopicWireless.htm. Wireless has many challenges: bandwidth scarcity, your data everywhere, ...
Communicate pragmatically (not formally; probably avoid joint WGs) with the WAP Forum, MWIF, 3GPP and 3G.IP. Mutual clue injection is key, e.g. mutual document review. Dealing with the "walled garden" model (captive customers): protocol and architecture discussions should not target this model, but foster seamless, secure, scalable access; IP might be a good protocol to escape the walled garden. NAT is a really bad solution at this scale (NAT works with one NAT box per domain, but domains are getting very large and a single NAT won't hack it), and RSIP is also dubious at this scale. The consensus is that IPv6 is the way to go (plus application-level gateways between IPv6 and IPv4); avoiding NAT for IPv6 requires appropriate address allocation policies. Mobile IPv4 has limitations.
Document for wireless industry why Internet transport protocols are the way they are including congestion issues. Get the requirements from the wireless standards bodies. Get them involved in the TSV WG. Investigate how transport works in face of non-congestive loss.
For QoS for mobile hosts, there is a need to investigate last-hop (e.g. wireless) QoS issues, and to investigate path QoS support when the path is a mixture of wired & wireless.
Need research on applications mobility, BOF on IP over Bluetooth (body area network), evaluate TCP/IP stack performance in WAP like environment, investigate inter-domain AAA, evaluate/recommend proxy architectures, investigate tokenized protocol encoding.
The archive of the mailing list is at http://www.research.att.com/lists/ietf-trace, also see http://www.es.net/pub/internet-drafts/draft-bellovin-itrace-00.txt
This work tries to address denial of service attacks by making it possible to see where an attack is coming from. A proposed tool is ICMP traceback, which has some other useful properties (e.g. path characterization). The goal is to trace back packets coming at you. The assumptions are: one is seeing a lot of traffic; one does not want to do harm under most conditions; and the mechanism must not make things worse (i.e. become another exploitable tool). With low probability (1/20K), routers send an ICMP traceback message to the destination along with the triggering packet. Traceback packets contain forward and backward router links, the traced packet, and an authentication field. The TTL is always set to 255 by the sending system, thus providing a distance to the recipient. Link field contents are always in "forward order", even on backward links. Subfields: interface name, src/dest address (if available) for this hop, and a linking blob, preferably formed from source/dest MAC addresses. In operation, the end system collects ITRACE packets, optionally selecting only those for interesting packets; the link fields are used to match up routers along the path (for DDoS attacks this will help identify the structure). Authentication: must guard against an attacker sending spoofed ITRACE packets. The ideal would be a per-message digital signature, but that is too expensive today; the choices are null authentication or clear text strings.
Null authentication raises the question: do we really need it? The TTL definition provides an unspoofable minimum distance, which might work. Clear text random string: include a per-output-interface authenticator string in each packet; strings last for several minutes. A possible version is <interface name, NTP time>, digitally signed; traceback packets would also include a digitally signed list of the last several strings. HMAC: a short-lived secret is used to generate an HMAC of the traceback packet; after the secret has expired, a digitally signed, timestamped copy is added to the list of previous secrets. In other words, you need to see two ITRACE packets from each router to validate.
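The sampling-plus-HMAC scheme described above can be sketched as follows. This is only an illustration under stated assumptions - the field layout, SHA-256 as the HMAC hash, and the function names are all hypothetical, not the draft's actual wire format:

```python
import hashlib
import hmac
import random

TRACE_PROB = 1.0 / 20000  # roughly one traceback message per 20K packets


def build_itrace(packet: bytes, fwd_link: str, bwd_link: str, secret: bytes):
    """Build a toy traceback message: link fields, a copy of the traced
    packet, and an HMAC authenticator over both (layout is illustrative)."""
    body = b"|".join([fwd_link.encode(), bwd_link.encode(), packet[:64]])
    auth = hmac.new(secret, body, hashlib.sha256).digest()  # short-lived secret
    return body, auth


def verify_itrace(body: bytes, auth: bytes, secret: bytes) -> bool:
    """Recipient-side check, valid only while the router's secret is current."""
    expected = hmac.new(secret, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, auth)


def maybe_emit_itrace(packet: bytes, fwd_link: str, bwd_link: str, secret: bytes):
    """Router forwarding path: emit a traceback message with low probability."""
    if random.random() < TRACE_PROB:
        return build_itrace(packet, fwd_link, bwd_link, secret)
    return None
```

Once the secret expires and its signed copy is published, a recipient holding two ITRACE messages from the same router can validate both, which is the two-packet property mentioned above.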
Even if signed there are PKI issues: how does the recipient validate the signature, that is, who has the right to sign a message from a given router? Ideally one needs a PKI based on assigned IP addresses.
Related work by Stefan Savage encodes the path information in the packet's IP ID field, see http://www.cs.washington.edu/homes/savage/traceback.html
Major open issues: does this help enough with DoS attacks? Will ISPs permit a mechanism that exposes so much of their net structure? Are there privacy implications? What authentication mechanism is best? What should the PKI look like? What if ISP's block ICMPs?
Marcus Leitch has done an implementation. It uses code that might go into routers someday, plus passive monitoring. It is 2500 lines of code on Linux.
Ron Roberts (of Stanford campus networking) and I discussed the campus IEEE 802.11 wireless pilot. They are trying out Lucent and Apple repeaters/access points, which inter-work. The cost is much higher from Lucent (e.g. ~$1K with PCMCIA card). The Gates building folks have set up the whole building, which required about 16 repeaters. Ron is thinking of a subnet per area on campus, so there is not a change of subnet as one moves within a building. One of the big drivers is to provision conference rooms and for visitors to campus conferences. They have not addressed whether they will loan PCMCIA cards to visitors. Addresses will be given out as roaming DHCP addresses to machines that have been registered in their network database (Netdb). Gates has an authentication scheme, so before one gets an address one has to enter a userid and password.
I borrowed one of the WaveLAN PCMCIA cards at the IETF. There was a deposit of $280AUD (AUD/USD ~ 0.6) for which they accepted a credit card, and if the card was not returned by the end of the conference then you bought it. The card uses one PCMCIA slot and protrudes about 1 inch beyond the edge of the laptop. It has 2 unlabelled LEDs. It does have a printed user guide but they did not give these out. Configuring the card for NT required expert help. It seemed easy to configure for W9x, Linux and Macs. The drivers for all except NT were available on the Web, but for NT a CD was required. Once configured, the laptop had to be rebooted and then it worked fine. I tried to set the WaveLAN driver up (i.e. played with the power saving features) so that the machine could be suspended, but was not successful. This was a nuisance when one wanted to move between conference rooms, since either the laptop had to be kept open and powered up or it had to be rebooted. It was reported to Lucent as a problem. There is an encryption option for the driver. The conference center required 9 access point devices to provide coverage. The handover between access points appeared to be transparent, though it is said Lucent does this handover with a proprietary technique, which means it may only work smoothly when the access points and PCMCIA cards are all from Lucent. Typical coverage is 50-100 meters for 11 Mbps. The devices will reduce their performance automatically in order to extend the signal distance, and can go out to a few hundred meters at ~1 Mbps.
NIMI appears to be re-invigorated and active once again. I talked with Andrew Adams of PSC and he pointed me to a new URL that provides access to the data (see http://www.ncne.nlanr.net/nimi/). I tried accessing the data for SLAC without success. Andrew says the data should be there, and we agreed to get together to see how to access it, and also to see how we would install PingER into NIMI and how we would get the data back.
The folks at RIPE were very appreciative of the comparison of RIPE & Surveyor results, and offered to buy me a drink.
I talked to Matt Zekauskas of ANS/Internet 2, and Paul Love, Co-chair of the Internet 2/NLANR Joint Techs. Paul invited us (Warren or me) to make a presentation at the Joint Techs meeting in Minneapolis (at U Minnesota) May 15-19; the most likely talk times are the morning of May 16 or May 17. There will also be presentations from Vern Paxson and Matt Mathis (heady company). He would like us to focus on providing an overview of what is going on in monitoring, providing some comparisons of the methods, and comparing I2, ESnet and commercial performance. The audience is mainly technical folks from Internet 2 ISPs, GigaPoP operators etc. He expects there to be a couple of hundred attendees.
One impressive unintentional demo occurred when Gopal Domelly from Cisco dropped his Dell laptop from about 4'6" onto a marble floor. After pushing various plastic parts back into place, he proceeded to successfully reboot it with no apparent damage. Unfortunately he would not agree to repeat the demonstration for interested sightseers to photograph for posterity.
|Introduction||Wide Area Fault Detection by Monitoring Aggregated Traffic||Assessing Internet performance using SYN based measurements||RIPE vs Surveyor||Real Time measurement of Long Range Dependence in ATM Networks|
|Design Principles for Accurate Passive Measurement||Network Performance Visualization: Insight through Animation||Detection of Change in 1 Way Delay||Experiences with NIMI||Assessment of Accounting Meters|
|Internet Performance Monitoring for HENP||Measurement of VoIP Traffic||FlowBoy||The Traffic Matrix||Analysis of Internet Delay Times|
|Statistics and Measurements||Performance Inference Engine||SingAREN Measurements||Felix Project||Discussions|
The First Passive and Active Monitoring (PAM) workshop was held at the Hamilton Novotel, Waikato, New Zealand on April 3-4, 2000, following the IETF in Adelaide, Australia. There were about 70 attendees, 40 from overseas. Probably about 50% of the attendees also attended the IETF. The connectivity was via 2 * 6 Mbps DSL lines from Telecom NZ. There were about 4 PCs and about a dozen laptop hookups. There was also a WaveLAN wireless setup.
Most of the talks were on active rather than passive measurements. This is partially because ISPs won't let one put sniffers/passive monitors on their networks. Active measurements work well at the edges of the network. The two methods are complementary. Most phone measurements have been passive.
The meeting was a great success, and there was a question of when/if to hold another meeting. Combining this one with the IETF was very successful; it enabled a contingent of providers to attend. It was agreed in principle that it would be good to have another meeting. The next IETF in a year's time is in Minneapolis. They asked for volunteers to host the next meeting.
Measurement is a science, and standard hardware & software are not good enough. There are capacity issues (e.g. how can we measure OC192?), confidence issues (we need confidence in the results and in every stage of the process, and need to understand the environment in which the measurements are taken: does spanning introduce problems? is the hub really a passive device or does it introduce variations? are the time stamps correct? does buffering on the card affect the timing? is the timing synchronized with UTC? are there scheduling effects in the OS that affect the timing?) and security issues. They are building hardware NIC cards with onboard processors and FPGAs to capture and filter packets. There are issues of how long to keep the archived information and what to keep, and of keeping an audit trail of how the data was processed (source code, OS rev levels etc.; one would also like a log of how the link configurations/routes etc. change). There are security issues in that some of the data may contain commercially competitive information, so the link owners may desire privacy; however, anonymizing the addresses can lose a lot of important information.
To extract VoIP information one needs long traces to see the start and end of representative data, to be able to see trends need to keep long archival series. For TCP traces he said we need 10 minute traces. To validate self-similarity etc. one may need traces for several months.
Simulation requires large volumes of data and examples of rare events (since rare events may dramatically affect the simulation accuracy).
I presented a talk on the above topic, see: http://www.slac.stanford.edu/grp/scs/net/talk/mon-pam-mar00/ There were several questions:
They have implemented a DBMS as a repository for their CA*Net3 data. They have a lot of tools such as Skitter and OC3Mon, and have databases for cflowd, MRTG and OC3Mon, with nice tools for drilling down through the database. The customer is the NOC, and they want to allow realtime access to the data, with alerts (e.g. see source host addresses beginning with 240 or greater, up to 255, to look for smurf attacks, and send an email warning to the network operator). The message is that collection is relatively easy but analysis is more complex.
Net admins are concerned about how the infrastructure is working and the availability of services within the provider network. Users are concerned about infrastructure, connectivity and availability of services around the world. To know what is happening outside the network they use ICMP messages collected passively using tcpdump; in particular they look at the ICMP unreachable messages. They add information to the ICMP messages providing the AS, topology etc. (using whois, the Internet Routing Registry & DNS). By correlating the ICMP unreachable messages they can identify the bottleneck. They find many events (ICMP unreachable messages) are generated by a single failure point. They are developing tools to enable visualization of what is not visible when one sees a broken link. Forecasting network availability is useful to net admins and users, and bottleneck analysis indicates dangerous regions; maybe this can provide automatic announcements in the future.
NLANR is using animation to help visualize the large amounts of data that are needed to characterize network performance. One of the tools they have developed is called Cichlid, which allows viewing datasets in 3D as if they were physical objects. The main author is firstname.lastname@example.org. Cichlid is free, is in C and uses OpenGL, running on FreeBSD, SGI, Linux and MS Windows. It has a client/server architecture. One should only need to write a small piece (~200 lines of C) to interface a new set of data. It produces 2 basic classes of visualization: bar charts, which can be stacked, and vertex/edge graphs. Tony McGregor demonstrated several examples of animated bar charts of site vs. time vs. RTT. The demo was on a 250MHz cpu with a 9GB hard disk.
They are working on an improved user interface (menu driven user interface, instead of command line), save & restore status, new models (beyond bar chart and edge/vertices), Postscript output, internal design to use threads (thinking about this but not committed to yet).
Problems that may come with VoIP include that its real-time nature conflicts with TCP algorithms, which may introduce extra jitter and pauses; so UDP is used for the data path, but UDP has no congestion control and no back-off, so it could be very bad for TCP on congested networks. The aim is to provide a model for simulation of VoIP traffic; then one can investigate the interaction between TCP & UDP. The method is to do passive measurement traces and from these measurements provide data for simulation.
The data comes from the Measurement and Operations Analysis Team ( http://moat.nlanr.net/), which has 15 OC3 & OC12 monitors, mainly in the US. A 2nd source is the NZ Internet Exchange, which has a 100Mbps Ethernet switch and has 2x10 minute traces daily from Dec 98 to April 99. The data is captured via spanning the switch (they believe there is low loss, but there will be some collisions); it is a software based measurement (Linux system), which limits timestamp accuracy.
They look for H.323 (the standard for VoIP), which allows inter-operation between different applications. They hunt for VoIP traffic by looking for specific TCP control ports (1503 and 1720, used to establish connections), then look for the dynamic RTP data & control ports (RTP uses an even port for data and even+1 for control). It also often has fixed-size data packets. The percentage of traffic is low, e.g. 22 conversations from 211 traces on NZIX & approximately 1 MB from 7 GB of traces from the NLANR data. The trace duration needs to be as long as possible (conversations often exceed the length of the traces); it may be possible to move to real-time data collection on a 24x7 basis.
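The port-based hunt described above can be sketched as a pair of simple predicates. This is only a rough sketch of the heuristic, not the authors' actual classifier; the function names are hypothetical:

```python
# H.323 call setup is looked for on well-known TCP ports; RTP media streams
# are then guessed from the even-data / odd-control port convention.
H323_SETUP_PORTS = {1503, 1720}


def is_h323_setup(tcp_dst_port: int) -> bool:
    """Is this a TCP connection to a well-known H.323 call-establishment port?"""
    return tcp_dst_port in H323_SETUP_PORTS


def looks_like_rtp_pair(port_a: int, port_b: int) -> bool:
    """RTP convention: an even port carries data, and data+1 carries RTCP."""
    data, ctrl = min(port_a, port_b), max(port_a, port_b)
    return data % 2 == 0 and ctrl == data + 1
```

In practice one would also check for the fixed-size data packets mentioned above, since many non-RTP UDP flows happen to use an even/odd port pair.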
Bytes/second is about 25Kbytes/sec but with large excursions. Control information is a lot lower. Another case was 1700bytes/sec.
For the simulations they want to vary TCP & UDP from 0-120% of link capacity. The TCP data is formed from the TCP NZIX trace logs; the data is offset & overlaid if required to generate extra load.
The results show that UDP traffic badly affects the TCP performance if the link is saturated (i.e. the UDP does not back off and TCP is badly affected - the available bandwidth for TCP drops off rapidly as UDP traffic increases).
Susmit Patel from Mitre. This was a description of a mathematical technique to deduce performance measures from transactional data.
Martin Horneffer, University of Cologne.
The motivation is to help the user understand what performance they might get and how to choose between ISPs, to be able to measure the performance of various Internet access points, and to help ISPs in selling services to customers. The first step is to compose a list of Internet hosts by observing (e.g. with NetFlow, tcpdump, NeTraMet) the traffic generated and seeing what hosts are actually used. Thus the actual hosts probed vary from hour to hour (the granularity of composing the list of hosts from NetFlow), so he does not have historical information on any single pair. He showed a histogram showing that, for Koln, 100 hosts cover > 50% of traffic and 1000 cover 66% of traffic; to get all traffic takes ~100,000 hosts. He uses a list of about 1000 hosts. Having got the list, he then measures the network layer using metrics from IPPM. Ping has problems (e.g. it may be treated differently from other traffic), so instead he uses UDP-echo and TCP SYN so that routers/hosts can't tell the difference.
He uses the TCP SYN mechanism to measure loss & RTT, and can close the connection with a simple RST. One needs to watch out for loss of the SYN-ACK coming back, since that adds a timeout (typically, but not necessarily, 3 seconds) before re-transmission. Loss of the third packet may cause a re-transmit of the SYN-ACK. One can use heuristics to remove the re-transmits (e.g. by looking at the RTT frequency distribution and looking for a 2nd spike at high (say around 3 second) delays).
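A minimal sketch of SYN-based RTT measurement follows. Crafting raw SYN/RST packets needs raw sockets and privileges, so as a stand-in this times a full TCP handshake via `connect()`, which measures roughly one RTT plus local overhead; the retransmit filter implements the heuristic described above. Function names and the 0.5 s slack window are assumptions, not from the talk:

```python
import socket
import time


def tcp_connect_rtt(host: str, port: int = 80, timeout: float = 5.0):
    """Approximate RTT as the time to complete the TCP three-way handshake.
    Returns seconds, or None if the connection failed or timed out."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    t0 = time.monotonic()
    try:
        s.connect((host, port))
        return time.monotonic() - t0
    except OSError:
        return None
    finally:
        s.close()


def drop_retransmit_spike(samples, retransmit_timeout=3.0, slack=0.5):
    """Heuristic from the talk: discard RTT samples clustered near the SYN
    retransmission timeout (~3 s), which indicate a lost SYN-ACK rather
    than a genuine round-trip time."""
    return [r for r in samples if abs(r - retransmit_timeout) > slack]
```

A more faithful implementation would inspect the measured RTT frequency distribution for the second spike rather than hard-coding the 3 second value.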
There have been complaints about TCP SYN "attacks". People complain via sending to well-known email addresses (root, postmaster, abuse, webmaster), by looking up the DNS SOA, by looking up the RIPE DB, or by complaining to the rector of the university. Martin explains the nature of what he is doing after hearing from the user. Some users ask him to stop monitoring. He tried putting information into the packet, but most people did not look at it. He set up a web page and a TXT entry in DNS (neither of which helped), set up a dedicated IP address with a descriptive name (measurement-for-thesis.rrz.uni-koeln.de), and put a virtual web server on the measurement host; the combination of the last pair is reasonably effective. UDP-echo is worse in terms of complaints than TCP SYN. In over a year he has received about 25 complaints; of these 4 were related to SYN-ACK, the others were UDP-echo. Since they dynamically select the remote sites based on passively measured TCP flow information, it is not realistic to notify on the order of 1000 sites in advance that they may see a lot of SYN-ACKs.
He plots the predicted TCP throughput (based on the formula from Padhye et al., see Proc. SIGCOMM, ACM, Sept 1998) as a cumulative frequency (cumulative across multiple connection pairs) for various providers.
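The Padhye et al. model predicts steady-state TCP throughput from loss rate and RTT. A sketch of the full formula is below; the default values for the retransmission timeout, delayed-ACK factor, and MSS are illustrative assumptions, not values from the talk:

```python
import math


def padhye_throughput(p: float, rtt: float, t0: float = 1.0,
                      b: int = 2, wmax: float = None, mss: int = 1460) -> float:
    """Predicted steady-state TCP throughput in bytes/s from the Padhye et al.
    (SIGCOMM '98) model. p = loss probability, rtt and t0 (retransmission
    timeout) in seconds, b = packets acknowledged per ACK, wmax = optional
    maximum window in packets."""
    if p <= 0:
        raise ValueError("model needs a loss probability p > 0")
    # denominator: fast-retransmit term + timeout term
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    rate = 1.0 / denom  # packets per second
    if wmax is not None:
        rate = min(rate, wmax / rtt)  # receiver-window limit
    return rate * mss
```

As expected from the formula, predicted throughput falls as either loss rate or RTT grows, which is what drives the spread between providers in his cumulative plots.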
His method is unbiased & objective since the list of hosts is obtained by automatic scripts; it scales to many hosts, requires low bandwidth, has good accuracy, and provides clear results for application performance. It requires resources - a user base, a router (NetFlow), a host machine, a dedicated address, database entries (DNS, RIPE), and permanent maintenance - DON'T TRY THIS AT HOME.
This had to do with understanding and measuring clock skew between sites.
There is an extensive list of flow management tools. However, they don't interwork, new software has to be introduced when flows are exported by a vendor in a different format or there is a new flow type, and the learn, install, configure cycle can be quite expensive. So this applies OO technology to flow management.
Talk was given by Heng Seng Cheng.
SingAREN is connected via 14Mbps to STARTAP (& thus vBNS, Abilene, CA*Net2) also connected to Japan, Taiwan, Korea, Malaysia and of course provides connectivity for Universities etc. in Singapore.
The tools need to notify in the event of link/node failure or performance degradation, and to provide network performance from an end user's perspective. They looked for available tools.
They use Surveyor, Skitter, ICMP with MRTG, and Speed Meter (from Hungarian MirrorNet company).
RTT to LA has a minimum of around 200msec, 72 msec to Korea/APAN.
Speed Meter goal is to help web site owners determine how their site performs in different parts of the world. See http://www.tracert.com/. They install agents in several locations in the world. Agent downloads a web page and measures different parameters of the page and reports to the central server. Measurement results reflect end-to-end performance seen by users, and results are sent to subscribers by email.
Satellite links are useful to connect SingAREN to neighboring countries such as Vietnam, the Philippines, Thailand and Burma, where it will be many years before fiber is installed. The BER is typically 10^-6 (i.e. much higher than fiber) and the RTT is long (500 msec), so the link is likely to dominate the overall performance of the IP measurements. The ITU has specified some QoS performance objectives for ATM over satellite, e.g. cell loss better than 7.5*10^-6.
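To see why a BER of 10^-6 matters at the IP layer: assuming independent bit errors, the probability that a packet survives is (1-BER)^bits, so packet loss grows with packet size. A small sketch (the function name is mine, and the independence assumption is a simplification - real satellite errors are often bursty):

```python
def packet_error_rate(ber: float, packet_bytes: int) -> float:
    """Probability that at least one bit of a packet is corrupted,
    assuming independent bit errors at the given bit error rate (BER)."""
    bits = packet_bytes * 8
    return 1.0 - (1.0 - ber) ** bits
```

For a 1500-byte packet at BER 10^-6 this gives roughly 1.2% packet loss, well above the fraction-of-a-percent loss rates TCP needs to perform well over a 500 msec RTT path.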
Henk Uijterwaal, Matt Zekauskas, Sunil Kalidindi.
See RFCs 2330, 2679 and 2680 for descriptions of the metrics for one-way delay and packet loss. The IETF requires two independent implementations that will give the same results; the projects are Surveyor and RIPE. No direct comparison is possible, so one has to look at distributions.
Both use GPS and UDP packets. Surveyor uses 40 bytes at 2/sec; RIPE uses 100 bytes at 3/minute. Both use Poisson scheduling. Both also do traceroutes, measurements are centrally managed, and results are available on the web. Surveyor uses BSDI Unix, RIPE uses FreeBSD, and they use different GPS devices (Surveyor uses the TrueTime PC[I]-SG "bus-level" card, which is expensive at ~$2K). There are 71 Surveyors covering 2100 paths; RIPE has 43 machines (they plan to double this), mainly at RIPE members, i.e. European ISPs. CERN, SLAC, RIPE NCC & Advanced N&S have both RIPE & Surveyor machines. Unix time is typically only accurate to 10 msec; BSD has a sub-clock at 1.2 MHz which gives usec accuracy, but the clocks on the machines run independently, so they use GPS to provide synchronization and get the internal clocks synchronized to a few usec.
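The Poisson scheduling both projects use draws exponentially distributed inter-probe gaps, as RFC 2330 recommends, so probes don't synchronize with periodic network behaviour. A minimal sketch (function name and seeding are mine):

```python
import random


def poisson_send_times(rate_per_sec: float, duration_sec: float, seed=None):
    """Probe send times over [0, duration) with exponential inter-send gaps,
    i.e. a Poisson process at the given mean rate. Surveyor's rate is ~2/sec;
    RIPE's is 3/minute (0.05/sec)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_sec)  # exponential gap, mean 1/rate
        if t >= duration_sec:
            return times
        times.append(t)
```

Over 1000 seconds at Surveyor's 2/sec, this yields about 2000 probes with randomly spaced, strictly increasing send times.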
He showed a plot of delays from RIPE to Advanced, and they track one another closely; he then showed the percentiles, and the distributions for 2.5%, median, and 97.5% look similar. He showed our plots of the statistics and then showed the drift required. Henk looked at understanding the drifts. He varied the packet length over 40, 200, 500 and 1500 byte packets, plotted delay vs. length, and gets a linear variation with slope a = (8.09±0.10)×10^-4 byte/ns, and for TA (8.47±…)×10^-3. He can explain 0.14 msec by the packet-size difference; further investigation is needed for the remaining 0.06 msec. They will look at the effects of different sampling frequencies.
A possibility would be to look at two receivers of the same data this may show some instrumental differences.
Talk will be available on Monday at http://www.ripe.net/test-traffic
Vern Paxson, Andrew Adams & Matt Mathis
NIMI is a command & control infrastructure for Internet measurements, ISPs and others volunteer resources and users (researchers) run experiments on the shared resources (requires delegation).
Configuration is done by the CPOC, which controls a set of probes. The CPOCs have Measurement Control (MC) and Data Acquisition Control (DAC). The key is flexible policy control with heterogeneous administration, using ACLs and delegation. There was an early focus on security design. There are 35 nimids with 2 CPOCs and 8 users/studies (e.g. Mark Allman is using it for capacity measurements).
Lessons learned include: remote error handling, lack of measurement grouping, key distribution is hard, the need for fine grained state persistence, data integrity under unexpected circumstances ... Heterogeneity lessons: tools required privileges, modifications to system config, secure admin access ... Large distributed software lessons: clocks, DNS flakiness, subtle APIs, exhausting resources - made harder by 4 different OSs.
Functional extensibility: experience has shown that adding "user" code is crucial. The basic tension between extensibility and avoiding unexpected behaviors boils down to resource management and security. Resources include: cpu, memory, disk space, I/O activity, network activity, packet filters (channels), TCP & UDP ports, and response ID (name) space (e.g. ports within ICMP errors, in particular ones that run at a high rate with ICMP errors). Threat models: subverting the platform provides a well-connected stepping stone for DoS (Denial of Service), sniffing, or altering measurement software. Attacks from within NIMI: measurement tools could deliver attack packets or collect private information. Perturbing other people's measurements: forged measurement packets, excessive resource consumption.
Trust models: grant privileged access to NIMI admins; the local admin performs privileged actions; there is widespread Internet reliance on trustworthiness by eminent authority. NIMI extensibility requires an iron-clad trust model, otherwise all providers need to trust all users - perhaps an open question.
New tools are installed out of band, not using NIMI (scp, tar etc.). They intend to deploy tools by allowing a nimid to get the tool from the MC, then hope to go to CPOC>MC>nimid, which should be more scalable.
Protection from corrupt tools can use code validation (scanning imported code; halting is hard) or file system sandboxing (of the entire NIMI probe: chroot to a private NIMI tree, or chroot to each measurement tree). Network sandboxing has a granularity problem; possible approaches are kernel mods, linking against a trusted library, an enforcement daemon, or "safe" languages. Proxying all network requests gives sender latency problems; one needs receiver timestamps in the driver (but there are latency problems for some metrics); it can be subverted but solves the packet filter woes. Tracking all resource usage can help, but one only finds out later.
Examples of safe languages are Java, Python and Perl taint. They have restricted programming semantics, restricted semantics for packet I/O, and complete control over resources. It is hard but a strong solution - but is it really needed?
Looking at safe language with runtime control on all resources, restricted network I/O.
Martin van den Nieuwelaar
AUCS is a tier 1 transit provider in the Netherlands that is looking to expand globally; it recently merged with Infonet. They have about 60 customers in 16 countries in Europe, with link speeds of 155 Mbps, and 3 OC3s to New York.
Up until late last year, monitoring was done by SNMP queries. They looked at other tools and came up with NetFlow exports from Cisco routers. Export is done via UDP so exports may not get through; there is a counter telling whether they got through, but no guarantee. NetFlow exports allow one to tell what portion of the traffic coming in one interface goes out another interface. He compared NetFlow with SNMP stats and got agreement to 5-10%. He has noticed that some flows last longer than the Cisco software reports. Products to analyze the stats include a Cisco product and cflowd; cflowd is free and is very good at integrating statistics.
About half the POPs are instrumented with collectors. They are collecting about 8 GBytes/day from each router and have about 150 routers, so they do a lot of aggregation (e.g. into 5 minute intervals).
A problem is how ifIndex numbers are assigned, depending on whether a board is added in hot mode or maintenance mode; this makes it hard to track the interfaces. They provide reports on the number of flows seen per hour for each link. Running NetFlow adds about a 15% load on the routers.
Bruce Siegell from Telcordia gave the talk.
Research and prototyping of novel methods for monitoring network health, based on analysis of loss and RTT from sparse mode monitoring.
Felix monitors make one-way delay measurements. They rely on clocks on hosts, so must deal with clock synchronization. They model the network by assuming all delays can be associated with links (bundling router delays into links), and assume links are unidirectional. Delay is the sum of the delays on the links in the path: path delay equals the topology matrix times the link delays, plus an error term. The discoverable topology covers only the links traversed. The goal is to produce a map of the network. They create a matrix of what paths share something in common, i.e. paths vs. edges. They look at delay peaks and see how they correlate across multiple pairs, to see if they share a component (edge). They also minimize path lengths (the number of edges in a path), with the constraint that one can't move the sources and destinations.
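The "path delay = topology matrix × link delays + error" formulation means link delays can be estimated by least squares from measured path delays. A toy sketch for a two-link network, solving the normal equations directly (the example network and function name are mine, not Felix's):

```python
def infer_link_delays(paths, delays):
    """Least-squares link delays for a two-link network. Each entry of
    `paths` is a (uses_link_a, uses_link_b) row of the topology matrix A,
    and `delays` holds the measured path delays d, so d = A x + error.
    Solves the normal equations A^T A x = A^T d for the two unknowns."""
    s_aa = sum(a * a for a, _ in paths)
    s_bb = sum(b * b for _, b in paths)
    s_ab = sum(a * b for a, b in paths)
    r_a = sum(a * d for (a, _), d in zip(paths, delays))
    r_b = sum(b * d for (_, b), d in zip(paths, delays))
    det = s_aa * s_bb - s_ab * s_ab
    return ((s_bb * r_a - s_ab * r_b) / det,
            (s_aa * r_b - s_ab * r_a) / det)
```

For example, with paths {a}, {b}, {a,b} measuring 10, 15 and 25 ms, the recovered link delays are 10 and 15 ms; with noisy measurements the least-squares fit spreads the error across links, which is where correlating delay peaks across path pairs helps.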
They investigated clock drifts. They look at the minimum delays, shifting the lower envelopes so they are centered around zero, then do the same for the reverse path, then flip one of the graphs (since drift extends (or contracts) the delay in different directions for the two paths) so that they lie on top of one another; one can then straighten the plots by adjusting for the clock drift.
Matthew Roughan, Darryl Veitch, Jennifer Yates, Martin Ahsberg, Hans Elgelid, Maurice Castor, Mick Dwyer & Patrice Abry - University of Melbourne.
They want to do real-time measurement to provide immediate access and reduce memory needs. They recognize that traffic is fractal, i.e. there are invariance relationships between scales (self-similarity), a lack of a characteristic time scale, radical burstiness, strong dependence on the past (no exponential drop-off), and difficult statistical properties. Implications for networks include lower utilization at a given QoS and buffer insensitivity: larger buffers don't save us.
Georg Carle, Jens Tieman and Tanja Zseby - GMD Focus
The motivation is to do fair charging, which requires inexpensive metering of used resources. The requirements depend on the QoS provisioning techniques, the charging scheme, and the expected traffic characteristics. One needs a flexible testbed. There are 4 major accounting meters: NeTraMet (free), IPAC Linux IP accounting, NetFlow (from Cisco only), and the NARUS probe.
Stele Martin <email@example.com>, Anthony MacGregor & John D. Cleary, U of Waikato
They want to break out web page delay into its various components; they need a model to describe the delays, and want to use passive measurements. Delay consists of physical network delay, network/queuing delay, and client/server processing delay. Later they want to look at loss.
They estimate the wire time from the measurements of the distribution of times. The network queuing delay is load dependent, and is taken as the difference between the minimum and the mean of the distribution. The difference between the mean of the data time and the mean of the SYN-ACK time gives the server time. Total delay = 2 * physical + 2 * net/queuing + server.
Total delay = 2*(client phys + server phys) + 2*(client net + server net) + server
Then plot 3 components on triangle with 3 coordinates (% server, % net, % physical) and the clustering shows where the delay components come from.
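The triangle (ternary) coordinates follow directly from the delay formula above: each component's contribution to the total is normalized to a percentage. A small sketch (the function name is mine; the weighting follows the talk's total delay = 2*physical + 2*net/queuing + server):

```python
def delay_components_pct(physical: float, net_queuing: float, server: float):
    """Percentage contributions (physical, net/queuing, server) to total
    web-page delay, where total = 2*physical + 2*net/queuing + server.
    The three percentages sum to 100 and are the ternary-plot coordinates."""
    total = 2 * physical + 2 * net_queuing + server
    return (200 * physical / total,      # two physical traversals
            200 * net_queuing / total,   # two queuing traversals
            100 * server / total)        # one server processing time
```

Sessions whose points cluster near one corner of the triangle are dominated by that component, which is what makes the clustering in the plot diagnostic.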
Future work: make more use of different RTT classes; analysis of persistent HTTP, non HTTP; packet loss & timeouts; statistical confidence/robustness; processing of partial sessions, validation on ideal environment.
Darryl Veitch, Jan-Anders Backar, Jens Wall, Jennifer Yates, Matthew Roughan.
I talked to David Moore of CAIDA/SDSC and Matt Zekauskas of ANS/Surveyor. Matt is optimistic about getting a Surveyor installed at SDSC in the near future.
I talked to Daniel Karrenberg of RIPE concerning automated use of the NIKHEF traceroute with the RIPE AS database. He encouraged such use, and expects it to be in the noise at the level I proposed (about 70 hosts, each tracerouted once an hour).
I discussed with Matt Zekauskas what should be the content of our presentation to the Internet 2 folks in May. He said they want an overview of the active measurement programs, something on long term trends, and IPv6 (especially if we have some new data).
Andrew Adams (of PSC) and I looked at how we would get started on putting PingER into NIMI. We installed software, set up and propagated keys, started up the new data analysis client, and reviewed how to run the measurement client.
We were asked if SLAC would be willing to host the next PAM meeting in 2001.
|Matt Zekauskas||ANS||Comparing Surveyor and PingER measurements|
|Matt Mathis||PSC||TCP throughput and dynamics|
|Daniel Karrenberg||RIPE||Comparing RIPE and PingER measurements|
|Andrew Adams||PSC||Installing NIMI|
|Vern Paxson||LBNL||NIMI Infrastructure|
|Paul Love||Internet 2||Presentation at next Internet Joint Techs meeting|
|Tony MacGregor||University of Waikato and CAIDA||AMP and PingER measurements|
|Thursday, March 23||Leave Menlo Park|
|Saturday, March 25||Arrive Adelaide|
|Sunday, March 26||IETF Reception|
|Monday, March 27 - Thursday, March 30||Attend 47th IETF meeting|
|Friday, March 31||Fly Adelaide to Rotorua|
|Saturday, April 1||Weekend|
|Sunday, April 2||Attend PAM reception|
|Monday, April 3||PAM meeting|
|Tuesday, April 4||PAM meeting and fly Auckland to San Francisco|