CENIC 2002 Conference, San Diego, May 5-7 

Author: Les Cottrell. Created: May 6

Contents
Attendees | Introduction | Discussions

Attendees: 

There were about 238 registered attendees. There was also webcasting shared with the Internet2 meeting. This was the 6th CENIC meeting. CENIC stands for the Corporation for Education Network Initiatives in California.

Introduction

Want to provide connectivity for California. They have defined 3 layers for the network: 

  1. An experimental development network (XD) for bleeding edge services, to be used by network researchers. 
  2. A high performance research (HPR) network for leading edge services for large applications; a stable network but still leading edge. 
  3. The Digital California (DC) network: high reliability and high quality services, to be used for California K-12 and research/education users.

Pacific LightRail (PLR) is part of XD. They want to expand it into National LightRail (NLR). They are working with the Pacific Northwest, CANARIE, and CUDI from Mexico. The conference had separate tracks for each layer. I spoke in the HPR track.

CANARIE - Bill St. Arnaud, Sr. Dir. of Network Projects, CANARIE

Talked about CA*net 4, the next generation CANARIE network for open Grid services. Grid computing is about managing computer services across a distributed set of sites. There is a similar problem in managing and controlling an optical network (wavelengths): e.g. one wants to let the customer set up/tear down circuits on the fly, i.e. just like controlling one's own fiber, or similar to what the mini-computer or PC did in enabling researchers to control their own destiny. Looking at high end users who can use a 1Gbps link, and also at web services. In this sense networks are like grids: they have inter-domain challenges and want to share resources across different management domains. The advantage of web services is that they eliminate the protocol standards process; the functionality of a service is defined in WSDL as opposed to being locked into a standard. WSDL & SOAP make it easy to define new services. CANARIE has contracted with Canadian carriers to provide OC192 links (10Gbps), so the challenge is to join up these carrier services. With optical cross-connects (used in preference to routers in most cases; routers are only used for peering with other networks) the end points (e.g. universities on CANARIE) have a direct optical wavelength connection. Believes circuits are not inherently bad; they got a bad rap from carrier business models and ATM. The important thing about the Internet was not so much that it was connectionless, but the recognition of the importance of end-to-end design and user control, so one could develop applications without asking for permission.
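
To make the web-services idea concrete, here is a minimal sketch (in Python; the endpoint, operation name and message fields are invented for illustration, since the talk described the approach, not a specific interface) of what a customer-initiated lightpath setup call could look like:

    # Hypothetical sketch only: the endpoint, operation and fields below are
    # invented; the talk described WSDL/SOAP-defined services for
    # user-controlled lightpaths, not this particular interface.
    import urllib.request

    SOAP_BODY = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <setupLightpath xmlns="urn:example:lightpath">
          <src>campus-oxc-1</src>
          <dst>peer-oxc-7</dst>
          <bandwidth>OC-192</bandwidth>
          <holdTimeSeconds>3600</holdTimeSeconds>
        </setupLightpath>
      </soap:Body>
    </soap:Envelope>"""

    req = urllib.request.Request(
        "http://lightpath.example.net/soap",   # hypothetical endpoint
        data=SOAP_BODY.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:lightpath#setupLightpath"})
    # urllib.request.urlopen(req) would return e.g. a circuit id,
    # used later for the matching tearDown call.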

Distributed Terascale - Fran Berman, Director, SDSC & NPACI

Talked about SDSC & NPACI and then the TeraGrid. The TeraGrid links SDSC/ANL/Caltech/UIUC and will link in more sites later: 13.6 TFlops, 600 TB of disk, and a 40Gbps network, to provide a new paradigm for data-oriented computing. Critical for disaster response, genomics, and environmental modeling. The software will be Linux and Globus. Want to enable new science. Has to be extensible. Assume heterogeneity with a grid hierarchy. Applications include the virtual observatory (wants distributed joins across many surveys) and the Biomedical Informatics Research Network (BIRN).

Internet2 & the Quilt - Steve Corbato, Director of Backbone Network Infrastructure (UCAID)

Abilene is a UCAID project in partnership with Qwest, Cisco, Juniper, Nortel (SONET), Indiana University (NOC), and ITECs in N. Carolina & Ohio. May '02 status: an OC48 SONET backbone with 53 GigaPoPs, 4 OC-48c connections, and a 1GE trial; 23 connectors will connect via at least OC12c, and the number of ATM connections is decreasing. 215 participants in all 50 states, DC, and Puerto Rico. 15 regional GigaPoPs support 70% of the participants. 50 sponsored participants, 23 state education networks (SEGPs). California is the biggest participant state. 

Level 3 fiber from UCSD to LA, with rings in the San Diego area. Taylor Communications will do the build-out. CENIC has colo facilities in the same place as the TeraGrid facilities.  

Transoceanic R&E bandwidths are growing, e.g. 5Gbps between Europe and NYC for GEANT. Key international exchange points are facilitated by I2 membership and the US scientific community: STAR TAP & StarLight (Chicago, GE), AMPATH from Miami to S. America (OC3c to OC12), Pacific Wave (Seattle), CUDI with CENIC and UT El Paso, and CA*net 3/4 at Seattle, Chicago, and NY.

Demoed packetized raw HDTV as a single 1.5Gbps UDP flow: 18 hours with no packets lost. The Abilene Qwest contract runs until 2006. Looking at IPv6, which will run natively, concurrent with IPv4. Surveyor work will continue. In Fall 2002 (last week of October) Abilene will have 10Gbps on SEA, SNV, LA, KC, CHI ... NY.  

The Quilt is a coalition of advanced regional networks, with 18 GigaPoPs plus SURA and EDUCAUSE; it includes CENIC and PN/W and is led by Wendy Huntoon. It covers UCAID projects and includes E2E measurement. 

National LightRail (NLR), a fiber optic network - Ron Johnson, U of Wash

Three layers of networking: research, experimental, and operational. CENIC/ONI for California, the GigaPoP (PN/W-GP) for the Pacific Northwest. Pacific Wave is a next generation national and international R&E network interconnection & exchange located in Seattle. Wants 2 major international exchange points for R&E nets on the West coast (Wilshire and Pacific Wave). 

Pacific LightRail (PLR, led by CENIC + PN/W-GP): experimental & ultra high performance nets and metaPoP exchanges/testbeds in the West. PLR is a wavelength infrastructure with owned fibers for the West coast (Seattle to SD), with links East to Denver & thence to fellow travelers in Chicago (I-wire). 

NLR is a discussion growing out of PLR, I-wire, the DTF, and UCAID's experimental & research net efforts. There are common needs for wave- and facilities-based infrastructure, connection & interconnection needs (many in the same places), common apps/content and projects, and funding opportunities. In one view, NLR is a lightweight but highly coordinated collaboration to provision, acquire and/or operate optical networking assets and services; it leverages the collective buying power and experience of the consortium (ANL, CENIC, P/NW, UCAID) from the national down to the metro scale (esp. backhaul, laterals, PoP access), and serves as an optical infrastructure substrate for e-science projects proposing to a diverse array of funding agencies. Hopes for extra Gbps services between the West and Chicago in the next year; Abilene will be upgraded (10Gbps) in the fall.

National and regional K-20 initiatives - Louis Fox, Vice Provost, U of Washington

National initiatives outside the research universities. Pacific Lighthouse is a collaboration between PLR and CENIC, content providers, and school partners: an investigation into the pedagogical uses of digital libraries, and an on-demand media repository built upon MediaWeb technology, I2 middleware, and high performance networking.

21st century networking & applications - Larry Smarr, California Institute for Telecommunications & Information Technology

We are only at the beginning of the Internet. There are big changes to come in the near future, including: a parallel wavelength network infrastructure (the lambda grid), scalable distributed computing power, storage of data everywhere, and a broadband mass market (the US has broadband in 4% of homes, S. Korea in 80%). 

Optical fiber bps doubles every 9 months, storage every 12 months, and CPU performance every 18 months. This indicates that networking will be a major driver in future developments, and will awaken a whole new range of applications.
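
A quick compounding check of those doubling times (my own arithmetic) shows why bandwidth pulls ahead:

    # Growth over 5 years at the doubling times quoted above (my arithmetic).
    years = 5
    for name, doubling_months in [("fiber bps", 9), ("storage", 12), ("CPU", 18)]:
        factor = 2 ** (years * 12 / doubling_months)
        print(f"{name:9s}: x{factor:.0f} in {years} years")
    # fiber bps: x102, storage: x32, CPU: x10 -- bandwidth outruns computing.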

California does not have a funded plan for an ONI (Illinois has I-wire, Canada has CANARIE's CA*net 4). Europe already has a 10GE backbone.  

There are plenty of applications, including HEP, neuro & earth sciences (each object is 3D and GBs); data are generated & stored in distributed archives and need a federated repository. Requirements: computing > PC clusters; communications > dedicated lambdas; data ...

NIH is funding a brain imaging federated repository. There are plans to expand it to other organs and many laboratories.

Mobile high performance Internet is the next step for CENIC. Wireless mobile connections will overtake fixed Internet connections, building from negligible in 1999. This will bring big new needs, for example video streams from remotely located cameras.  

Emergency response scenario: talking of ID tags with built-in wireless, GPS, etc. 

Overview of the CalREN-HPR Network - Gregg Scott, CENIC

Three networks are being built: XD, HPR and DC (XD is unfunded), with common facilities, hardware/software & staff. California has been in a leadership position: BARRNet/CERFnet > CalREN-2 > CalREN-HPR, which will hook into Abilene NG (it will be the first net to connect to Abilene at 10 Gbps). The bottom line: new performance demands (low latency), increased inter-site research & collaboration, the need to break the cost-per-bit & pipe-capacity charging models, and opportunities to apply higher-Ed IP LAN models to WAN applications. Optics enables differentiated services.

The transport components include: a backbone that is a mix of waves, dark fiber & DWDM equipment; metro hub sites interconnected by dark fiber & WDM equipment; the last mile, campus & carrier (dark fiber); and WDM equipment from hub site to campus. PLR is a 10Gbps backbone (POR, SEA, DEN, SFe, LA, SNV); GigaPoPs connect to the backbone and sites connect to the GigaPoPs. Stanford connects to PAIX, which connects to SNV. This offers a 4-fold increase in bandwidth compared to the existing CalREN-2. At a campus there are 2 routers, one for HPR and one for DC. S. Cal needs optical amps on their rings, which adds cost and maintenance complexity. Will deliver 4 x 10Gbps waves from I-wire to the S. Cal ring. The S. Cal ring will have 17 waves; some waves get dropped off around the ring. The boxes being purchased will support 40 waves. These waves will be live by the end of CY02. 

Running into problems with suppliers (e.g. going Chapter 11, etc.). Have legal folks working to get protection. Multiple carriers will add to complexity, but the prices are good. Have also got to address how to peer between XD and HPR, and between HPR and DC (for reliability). Connections to campuses will be multiple GE since campuses can't afford the 10GE interfaces.

Interplanetary Internet and HPR: common implementation issues - Scott Burleigh, JPL <scott.burleigh@jpl.nasa.gov>

This was a discussion of the last 45 million mile problem. The primary problem is extreme delays: e.g. the RTT to Mars is 8 minutes to 40 minutes, and at times one cannot talk at all for an extended period, since the spacecraft is on the other side of the Sun or on the other side of Mars. Only solar power is available, and the craft may be much further from the Sun, so power drain is critical. 
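
The quoted RTT range is essentially light travel time over the varying Earth-Mars distance; a rough check (the distances are round figures I have assumed, not from the talk):

    # Round-trip light time to Mars at two assumed distances (round figures).
    C_KM_S = 299_792                 # speed of light, km/s
    AU_KM = 149_600_000              # one astronomical unit, km
    for label, au in [("near opposition", 0.52), ("near conjunction", 2.5)]:
        rtt_min = 2 * au * AU_KM / C_KM_S / 60
        print(f"{label}: RTT ~{rtt_min:.0f} min")
    # ~9 and ~42 minutes, bracketing the 8-40 minutes quoted in the talk.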

Gaps in infrastructure, e.g. no assurance of end-to-end connectivity at the time observations are taken (no fiber, no phone, no continuous satellite communication). Finland is a case in point: people on snowmobiles travel constantly among communities, so install wireless-equipped computers on the snowmobiles & use store & forward protocols. "Bundles" are stored on hard disk while the snowmobiles are en route, and forwarded when the snowmobiles come within range of base stations (villages). Basically multi-hop email. 
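
A minimal sketch of the store-and-forward idea (my own toy illustration; the names are invented, and real delay-tolerant bundle protocols add routing, custody transfer, expiry, and much more):

    # Toy store-and-forward relay: bundles persist on disk while the carrier
    # is out of range and are handed over on contact with a base station.
    import json, pathlib

    QUEUE = pathlib.Path("bundle_queue")
    QUEUE.mkdir(exist_ok=True)           # survives power cycles, unlike RAM

    def store(bundle_id: str, payload: dict) -> None:
        """Queue a bundle while between villages."""
        (QUEUE / f"{bundle_id}.json").write_text(json.dumps(payload))

    def forward_all(send) -> None:
        """On contact with a base station, hand over everything queued."""
        for f in sorted(QUEUE.glob("*.json")):
            if send(json.loads(f.read_text())):   # True once the hop accepts it
                f.unlink()                        # drop local copy after handoff

    store("b001", {"dst": "village-2", "body": "basically multi-hop email"})
    forward_all(send=lambda bundle: True)         # stand-in for a radio link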

A second case in point is Devon Island in the Arctic (an environment similar in temperature to Mars), where information is passed via LEO satellites (as they pass over) to the Planetary Society in Pasadena.

Experiences and results from implementing the QBone Scavenger Service - Les Cottrell

See www.slac.stanford.edu/grp/scs/talk/cenic-may02.html

Lab perspective of Importance of HPR - Peter O'Neil, NCAR

CPU performance improves 60%/year; optical bandwidth has surpassed server performance. Problems posed by high bandwidth links: large bandwidth-delay products (TCP becomes oscillatory) and MTU sizes. High performance flows run slower than line rates; delays increase even with higher bandwidth; TCP tuning issues are non-trivial. Poorly conceived stacks; router/switch buffer queues are inadequate; slow start & the AIMD algorithm. Web100's goals: reduce the wizard gap, add instrumentation to the TCP stack, make it easy for non-experts to achieve higher performance, and enhance TCP performance. Alpha 1.2 was released May 3 '02. It is a freely available software distribution (www.web100.org/download), with an IETF standards process for the MIB to benefit all. Attention is turning to working with OS vendors to incorporate the standard enhancements into their stacks; talking to HP and Sun. 
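
To make the bandwidth-delay product problem concrete, a standard back-of-envelope calculation (the numbers are illustrative, not from the talk): TCP's window must cover bandwidth x RTT to keep the pipe full.

    # Window needed to keep a pipe full: bandwidth * RTT (illustrative numbers).
    def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
        return bandwidth_bps / 8 * rtt_s

    for gbps in [0.155, 1.0, 10.0]:                # OC-3, GE, 10GE
        mb = bdp_bytes(gbps * 1e9, 0.070) / 2**20  # 70 ms coast-to-coast RTT
        print(f"{gbps:6.3f} Gbps -> window ~{mb:5.1f} MB")
    # GE at 70 ms needs ~8 MB of socket buffer; default stacks of the era gave
    # ~64 KB -- the "wizard gap" that Web100 instrumentation aims to close.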

Net100 applies Web100 software to existing problems such as bulk transfers over high performance links. The goal is to develop a network-aware OS. 

Future Revolution in Optical Communications & Networking - Alan Willner, USC

The actual title was "Reconfigurable Multiple-Wavelength Optical Systems and Networks". It was like a primer on optical communications. In 1996 the Tbps-over-a-fiber barrier was broken. Demand is still growing for more infrastructure. "Things grew by a factor of 1000 and just recently contracted by a factor of 10, so overall it is great." "You can transmit infinite bandwidth over 0 distance." OC-768 = 40Gbps, OC-3072 = 160Gbps. The sweet spot in terms of optical difficulty keeps moving, e.g. OC48 in 1996, then OC192 in 1998/9. The chromatic dispersion penalty increases with the square of the speed, so each step in speed by a factor of 4 results in a factor of 16 in chromatic dispersion. 
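
That square law means the dispersion-limited reach collapses as the line rate rises; a ballpark sketch (assuming ~100 km of uncompensated reach at 10 Gbps over standard fiber, a figure consistent with the talk):

    # Dispersion-limited reach scales as 1/bitrate^2; the 100 km reference at
    # 10 Gbps over standard fiber is an assumed ballpark figure.
    def reach_km(gbps: float, ref_gbps: float = 10.0, ref_km: float = 100.0) -> float:
        return ref_km * (ref_gbps / gbps) ** 2

    for gbps in [2.5, 10.0, 40.0]:                 # OC-48, OC-192, OC-768
        print(f"{gbps:4.1f} Gbps: ~{reach_km(gbps):7.1f} km uncompensated")
    # ~1600 km at OC-48 but only ~6 km at OC-768: 4x the speed costs 16x the
    # reach, which is why the sweet spot keeps moving.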

Big steps: WDM; erbium-doped optical amps; dispersion management; non-static & reconfigurable nets; gain & power equalization; dynamic dispersion compensation; polarization mode dispersion; sustaining the bandwidth growth.

Power loss (dB/km) is at a minimum around 1550 nm, where roughly 25 THz of bandwidth is available. v = c/n (n = the index of refraction). Pulse spreading (ps/nm/km) is a function of distance and the square of the bit rate, so at high speeds one runs into problems around 10Gbps and 100km. So use multiple wavelengths at lower speeds and multiplex. Beyond ~50km one would otherwise need to demux the signal into separate wavelengths, regenerate each wavelength, and then mux again: very expensive. An optical amp, on the other hand, is cheaper than a single regenerator and amplifies all wavelengths together. An amp needs isolators to remove reflections. One would like a fiber with no dispersion, but the index of refraction depends on intensity, and there are fewer photons in the wings of a pulse, so different parts travel at different speeds. Some chromatic dispersion is needed to keep wavelengths separated so they do not interfere through non-linear effects; so zero dispersion is bad, and so is big dispersion. Use dispersion compensating fiber: one channel moves faster, then is reversed so it travels slower, so the net dispersion is zero but along the path there is dispersion. A Raman amp is a distributed amp that does not require such high power on the fiber. In 2001 fiber was being deployed worldwide at an aggregate rate he likened to Mach 3. 
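
The dispersion-map trick can be sketched numerically (span lengths and dispersion coefficients are my assumed, typical values): dispersion accumulates over each standard-fiber span and a short length of compensating fiber pulls it back, so the end-to-end total is zero while it is non-zero everywhere along the path.

    # Dispersion map with assumed, typical values: standard fiber at
    # +17 ps/nm/km, dispersion compensating fiber (DCF) at -85 ps/nm/km.
    SMF_D, DCF_D = 17.0, -85.0            # ps/nm/km
    SMF_LEN = 80.0                        # km per span
    DCF_LEN = SMF_LEN * SMF_D / -DCF_D    # 16 km of DCF cancels an 80 km span

    accumulated = 0.0                     # ps/nm
    for span in (1, 2, 3):
        accumulated += SMF_D * SMF_LEN    # +1360 ps/nm across the span...
        accumulated += DCF_D * DCF_LEN    # ...pulled back to zero at the amp
        print(f"after span {span}: {accumulated:+.1f} ps/nm")
    # Net dispersion is ~0 end-to-end yet non-zero along the way, so adjacent
    # channels never stay phase-matched long enough for non-linear mixing.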

Funny things go on in actual fiber, so one needs to accommodate dynamic changes. With WDM one can pick off a single wavelength without affecting the other wavelengths. Use self-healing fiber rings. Optical cross-connects typically use MEMS mirror systems. 

Gain & power equalization: the gain of an optical amp is not flat. One can have an amp that equalizes power, but this depends on the link loss, and that may change. 

Temperature dependent chromatic dispersion is an increased problem at higher bandwidths.

Optical network monitoring covers power, wavelengths, optical signal-to-noise, distortion, CD, PMD, and non-linearities; the need is to find non-catastrophic problems in optical channels.

Polarization Mode Dispersion (PMD): fiber deployed in the 80s & 90s had a non-circular core, so one gets different speeds for different polarizations. Nowadays fiber is more circular, but PMD is still a problem at higher bit rates. One can put the polarizations back together with a time delay element.
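
Why PMD only bites at high bit rates: the differential group delay grows roughly as the square root of length, and matters once it is a sizable fraction of a bit period. A rough illustration (the PMD coefficient and link length are my assumptions):

    # Differential group delay vs. bit period; coefficient and length assumed.
    PMD_COEFF = 1.0                       # ps/sqrt(km), plausible for older fiber
    LINK_KM = 400.0
    dgd_ps = PMD_COEFF * LINK_KM ** 0.5   # ~20 ps of polarization spread

    for gbps in [2.5, 10.0, 40.0]:
        bit_ps = 1000.0 / gbps            # bit period in ps
        print(f"{gbps:4.1f} Gbps: DGD = {100 * dgd_ps / bit_ps:3.0f}% of a bit")
    # 5% at OC-48, 20% at OC-192, 80% at OC-768: negligible, then noticeable,
    # then fatal without a compensating time-delay element.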

All-optical is the holy grail. It will need optical buffering, optical labeling of packets, and a way to look up 100,000 addresses for routing ...

He believes another revolution is coming. At 40Gbps the electronics is tough. There is only about 25 THz available in a fiber; experiments are now within a factor of 10 of the fundamental limit. One can't throw out the 300M km of fiber in the ground. 0.05 bits/s/Hz is spectrally very inefficient. In the past, big steps were made by different size fibers and modes (multi to single mode), coherent detection, optical amps, DWDM ...
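
For scale, taking the ~25 THz figure above at face value (my own arithmetic, not his): capacity is usable bandwidth times spectral efficiency, so 0.05 bits/s/Hz leaves enormous headroom.

    # Fiber capacity = usable bandwidth x spectral efficiency (illustrative).
    USABLE_HZ = 25e12                     # ~25 THz low-loss window, per the talk
    for eff in [0.05, 0.4, 1.0]:          # bits/s/Hz
        print(f"{eff:4.2f} b/s/Hz -> {USABLE_HZ * eff / 1e12:5.2f} Tbps per fiber")
    # At 0.05 b/s/Hz a fiber carries ~1.25 Tbps; denser coding buys another
    # factor of ~20 before even reaching 1 b/s/Hz.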

Networking the engineering, IT, humanities & social sciences - Ruzena Bajcsy, prof of EE & CS at UCB

 

