Internet 2 Offices, Ann Arbor, Michigan
Author: Les Cottrell. Created: October 26, 2001
Sixteen attendees in person (14 used their laptops), plus about 3 sites by video, with slides via the web. Used VRVS; Internet 2 is supporting VRVS deployment. The conference room had wireless and wired access. Internet 2 has over 300 members. Need to look at how to educate university CIOs etc. on the aims of HEP, and on HEP being a bellwether application for the Internet. There are Internet 2 staff focused on special areas, including medicine. They also work with specific faculty members in areas of focus: earthquakes, HEP ... (Charles (Liu?) is the legman). Hope to identify funding opportunities.
Many of the talks are available at http://www.usatlas.bnl.gov/computing/mgmt/lhccp/henpnet/FirstMeeting/FirstMeetingAgenda.html
Large amounts of data: 500 TB in 2001, PB by 2002, EB by 2012. Collaborations are critical. Network backbones are advancing rapidly. Data grids rely on seamless, transparent operation of networks. Hierarchy of tiers of sites. LHC requires excellent transatlantic bandwidth (OC12 in 2002, then doubling each year, OC192 by 2006). Current experiments successfully use the network to distribute files and are pushing network requirements. US Internet traffic growth rates increased from 2.8 times per year in 1996 to 4 times per year in 2001. GriPhyN/iVDGL is an example of a grid being built worldwide. The Abilene/QWest partnership has been extended to 2002. In Europe, GEANT connects European countries at 10 Gbps. Japan has SuperSINET starting in 2002: a 10 Gbps core plus connections. StarLight will give high-speed connectivity from the US to Europe. MRTG plots show usage of bandwidth as soon as it is installed, and also show programmatic usage lasting long periods (days) at a time. Have to work on window sizes & streams to optimize throughput. High throughput requires low loss (may require new TCP variants), getting down to the error rates of the fibers themselves. Many other challenges: router horsepower, configuration, LANs, server & client CPU configurations, firewalls, the wizard gap. Network-related hard problems: query estimation, policy vs. capability, security, simulation, metrics of performance & conformance.
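The point about tuning window sizes can be made concrete: to fill a pipe, the TCP window must cover the bandwidth-delay product. A minimal sketch, using illustrative figures (an OC12-class link and a 100 ms transatlantic RTT are assumptions, not numbers from the talks):

```python
def required_window_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Minimum TCP window (bytes) needed to keep a path of the given
    bandwidth and round-trip time fully utilized (bandwidth-delay product)."""
    return bandwidth_bps * rtt_s / 8.0

# Assumed example path: OC12 (~622 Mbit/s) across the Atlantic (~100 ms RTT).
window = required_window_bytes(622e6, 0.100)
print(f"Window needed: {window / 1e6:.1f} MB")  # ~7.8 MB, far above common 64 KB defaults
```

This is why untuned hosts see a tiny fraction of the available bandwidth: the default window, not the link, is the limit.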
UCSD 10GE workshop: the 10GE IEEE standards committee is dead set on a 1500-byte MTU. A strong statement is that there is a linear relationship between throughput and frame size. It will be hard to compete with the 10GE WAN standard.
This forum is very important to HENP as a place for networkers & physicists to get together to work the problems. HENP depends on networks; bandwidth demand is huge & critical. DoE recognizes these needs and the need to get people together and crystallize the problems. ICFA has had a standing committee to address network requirements; it will be revitalized. Need to clearly and compellingly present plans to funding agencies. The current budget being worked on is FY04. Need active participants in working groups such as this one, and in ESCC. Need innovative ways to work with vendors. Need to work across international boundaries. Funding will be a patchwork including ESnet, Abilene, DOE/HENP, European agencies etc.
Poor network performance often arises from bugs in the TCP stack. The Psockets library allows multiple streams; it is being used in GridFTP, Netscape, GNUtella. Low levels of systemic packet losses (i.e. not due to router etc. queuing; e.g. from txqueuelen in Linux not being large enough, or bad cables) exist and come in bursts (Bolot). Want to separate systemic drops from router packet losses.
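One way to separate bursty systemic drops from independent router-queue losses, in the spirit of Bolot's analysis, is to compare the conditional loss probability (loss given the previous packet was lost) with the unconditional loss rate. A sketch on a synthetic trace (the trace and thresholds are assumptions for illustration):

```python
def loss_burstiness(trace):
    """Given a 0/1 trace (1 = packet lost), return (unconditional loss
    rate, conditional loss rate given the previous packet was lost).
    Conditional >> unconditional suggests bursty, systemic loss (bad
    cable, undersized txqueuelen) rather than independent queue drops."""
    p = sum(trace) / len(trace)
    pairs = sum(1 for a, b in zip(trace, trace[1:]) if a == 1 and b == 1)
    prev_losses = sum(trace[:-1])
    p_cond = pairs / prev_losses if prev_losses else 0.0
    return p, p_cond

# Synthetic example: ~2% loss, arriving in back-to-back bursts of two.
trace = [0] * 96 + [1, 1] + [0] * 96 + [1, 1]
p, p_cond = loss_burstiness(trace)
print(f"loss rate {p:.2%}, conditional loss rate {p_cond:.2%}")
```

For independent drops the two rates would be about equal; here the conditional rate is far higher, flagging bursty, systemic loss.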
Use of multiple streams is OK as long as one does not overdrive the network. It is making up for limitations of TCP. The knee of throughput vs. streams happens where the streams fill the pipe. Multiple streams have a similar effect to using large (jumbo) frames.
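The knee can be sketched by treating each stream as loss-limited (Mathis-style) and capping the aggregate at the pipe capacity; the knee sits where n times the per-stream rate first fills the pipe. The path parameters below are assumptions for illustration:

```python
import math

def aggregate_throughput_bps(n_streams, mss_bytes, rtt_s, loss_rate, capacity_bps):
    """Aggregate throughput of n parallel TCP streams, each limited by the
    Mathis model (rate ~ MSS*C/(RTT*sqrt(p))), capped by pipe capacity.
    The 'knee' of throughput vs. streams is where the cap first binds."""
    per_stream = (mss_bytes * 8 * 1.22) / (rtt_s * math.sqrt(loss_rate))
    return min(n_streams * per_stream, capacity_bps)

# Assumed 622 Mbit/s path, 100 ms RTT, 1e-5 loss:
for n in (1, 4, 8, 16, 32):
    r = aggregate_throughput_bps(n, 1460, 0.100, 1e-5, 622e6)
    print(f"{n:2d} streams: {r / 1e6:5.0f} Mbit/s")
```

Below the knee each added stream helps (like a proportionally larger MSS would); beyond it, extra streams only add competition for the same capacity.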
Interesting paper by D. M. Chiu & Raj Jain, "Congestion Avoidance in Computer Networks", defines fairness.
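Chiu & Jain's central result is that additive increase / multiplicative decrease (AIMD) drives competing flows toward equal shares. A small simulation sketch, using Jain's fairness index (from the same authors' work) as the metric; the capacity and increase/decrease parameters are illustrative assumptions:

```python
def jain_fairness(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 = perfectly equal."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

def aimd(rates, capacity, alpha=1.0, beta=0.5, rounds=200):
    """Each round every flow adds alpha (additive increase); when total
    demand exceeds capacity, every flow multiplies its rate by beta
    (multiplicative decrease). The rate gap halves at each decrease."""
    rates = list(rates)
    for _ in range(rounds):
        rates = [r + alpha for r in rates]
        if sum(rates) > capacity:
            rates = [r * beta for r in rates]
    return rates

# Two flows starting far apart converge toward the fair share:
start = [90.0, 10.0]
end = aimd(start, capacity=100.0)
print(f"fairness before: {jain_fairness(start):.3f}, after: {jain_fairness(end):.3f}")
```

Additive-additive or multiplicative-multiplicative schemes do not converge to fairness this way, which is the paper's argument for AIMD in TCP.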
Memory is now faster than disk drives and so is not the bottleneck; wavelength-connected disks therefore become interesting.
Dantong wants SLAC to join in this; we talked afterwards about how to accomplish it. Looking at differences between Globus/MDS and GMA.
Carriers are not asking next-generation router manufacturers for multicast support. I2 has 51 connections (40% at OC12; 15 regional GigaPoPs support 70% of the participants) and 201 participants. Trans-oceanic R&E bandwidth is growing rapidly due to the new availability of fiber. Exchange points are migrating from ATM to GE switches. Current systems can support 160 lambdas at 10 Gbps, i.e. 1.6 Tbits/s. The cost of the fiber is only 2% of the total (a cross-country dark fiber pair is about $19M today); the cost per wavelength is $3-5M ($30M for the first lambda, so to use it well one needs to light a few lambdas); the cost for 1.6 Tbits/s is about $500M. Bandwidth exceeds Moore's law in growth. Optical switches are not ready for prime time and cost about $5M each. Now running at 2.5 Gbps; want to move to 10 Gbps, then exploit adding extra lambdas. Moving to lambdas means using unprotected services. The big hope is to get optical transmission costs down to fiber costs. There is a glut of older, lower-speed equipment available, which is slowing down the development of new higher-speed equipment. They are pushing IPv6, and they recognize ESnet's contribution. Need IPv6 to maintain network transparency. Need to improve resiliency to take account of unprotected services; want reconfiguration in 100s of msec. Want to deploy a 10 Gbps lambda at SC2002 in Baltimore. Current lambdas are not pure optical; there is still a lot of SONET signaling. 10GE has two standards, a WAN PHY and a LAN PHY; want to understand how the WAN PHY impacts all-optical networking. Due to changes in the financial environment, the QWest deployment of DWDM is not as widespread as SONET.
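The quoted cost figures roughly add up; a worked check using the talk's round numbers ($19M fiber pair, $30M for the first lambda, and the low end, $3M, of the per-wavelength range):

```python
# Worked check of the lambda-economics figures quoted above.
# All dollar figures are the talk's round numbers, not current prices.

fiber_pair = 19e6        # cross-country dark fiber pair
first_lambda = 30e6      # first wavelength carries most of the fixed equipment cost
extra_lambda = 3e6       # each additional wavelength (low end of the $3-5M range)
n_lambdas = 160          # 160 x 10 Gbit/s = 1.6 Tbit/s

total = fiber_pair + first_lambda + (n_lambdas - 1) * extra_lambda
print(f"Total for 1.6 Tbit/s: ${total / 1e6:.0f}M")        # ~$526M, i.e. "about $500M"
print(f"Fiber share of total: {fiber_pair / total:.1%}")   # ~3.6%, same order as the quoted 2%
```

The arithmetic confirms the headline claim: fiber itself is a small percentage of the cost, and almost all of the ~$500M is wavelength equipment, so lighting only one lambda wastes the fixed investment.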
Surveyor infrastructure support is moving to Wisconsin (Paul Barford). NG Internet2 will have performance requirements, packet reflectors.
Want to enable users to get high performance without needing wizards. Want to work with application communities, including HENP & medicine, on applications such as FTP & video conferencing. Want to leverage existing measurement infrastructures by bringing together the community efforts and projects. Need operations information for all nodes along the end-to-end path. Send experiences (war stories / case studies) in troubleshooting to http://www.internet2.edu/e2epi/. These will need to be sanitized to hide private infrastructure, and they age as they become out of date. They are willing to provide a writer to assist in documenting. Reports should be sent to info-E2Epi@internet2.edu. Set up for communities of common interest, such as LAN administrators. Need a finger-pointing tool, and to know who to contact.
Reviewed and modified the mission, technical objectives and activities. Next meeting: January 30th, Tempe, Arizona, coinciding with an Internet 2 meeting. It would be good to have pointers to regular iperf measurements so information can be shared. There are regular iperf measurement activities at GSFC, SLAC, UMich (http://atgrid.physics.lsa.umich.edu/~cricket/cricket/grapher.cgi) and CERN.