NTONC Research Partner Meeting
Rough Notes by Les Cottrell
The NTON is an all-optical open testbed for demonstrating high bandwidth applications and emerging wavelength division multiplexing (WDM) technologies. The Consortium is a partnership of commercial, educational and government organizations (including NORTEL, LLNL, Sprint, Pac*Bell, Hughes ...) doing research in "advanced optical networks". The testbed configuration is a 400 km optical ring at OC48 (2.5 Gbps) around the Bay Area. One of the major next steps is to get high bandwidth applications.
The National Transparent Optical Network Consortium (NTONC) Research Partner Meeting was held at LLNL on 7/30/97. There were 3 goals.
Summary of Application Network - Bill Lennon
Showcase high bandwidth applications and accelerate emerging technology deployment. The optical ring backbone around the Bay Area has 4 main backbone sites at UCB, Sprint Burlingame, LLNL & Pac*Bell in the East Bay. There are 4 wavelengths, 2 being used for SONET. Sites include UCB, Oakland, Vallejo, Walnut Creek, San Ramon, LLNL, SJO, Burlingame, SFO. Includes UCSF, NASA. Includes Nortel, Sprint, Pac Bell. Stanford may/will get connectivity via the Silicon Valley Test Track (SVTT), an OC3 backbone on the Peninsula extending up to SFO. There are no IP routers yet; they want to bring in a router vendor.
M/M Distributed Lab & Vertical Notebook
Adaptive management of heterogeneous multipoint collaborative teleconferencing environments.
Interval QoS Routing for multipoint conferencing.
Multipoint collaborative management of remote satellite tracking antennas & orbital on-board measurements.
NTON management notebook. Capture network operators experience at different levels: optical, SONET, ATM etc.
Scalable 100 Gflop Cluster -- Robert Clay, LLNL
Commodity-based technology to build clustered computers (currently DEC Miatas - Alphas with Linux - at $5K each), a factor of 5 cheaper than customized solutions. Goals are to be scalable, manageable (remote boot, remote power cycling), with 100 GFLOP/s theoretical peak in FY97 (128 nodes), and multi-cluster configurations (Meta Clusters) requiring high speed networks, including over the WAN. Uses Myrinet (Gbit/sec backbone message passing network). Form factor is a big concern since it defines the amount of real estate needed.
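The cluster sizing above can be checked with a quick back-of-envelope calculation. Note the per-node peak figure is inferred here from the stated 100 GFLOP/s aggregate over 128 nodes; it is not in the notes.

```python
# Back-of-envelope sizing for the FY97 commodity cluster described above.
NODES = 128
NODE_COST_USD = 5_000           # stated per-node cost (DEC Miata)
AGGREGATE_PEAK_GFLOPS = 100

per_node_peak = AGGREGATE_PEAK_GFLOPS / NODES   # ~0.78 GFLOP/s per Alpha (inferred)
cluster_cost = NODES * NODE_COST_USD            # $640,000 in commodity parts
custom_cost = cluster_cost * 5                  # "factor of 5 cheaper" => ~$3.2M custom
```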
IPv6 -- Helen Chen, Sandia
Using Ipsilon technology (IP switching over ATM). Want to have a demo for SC97 at OC12 (carving out an OC3c channel) between Sandia and the San Jose booth. Also interested in collaborating with NASA and Fore. Expect to use RSVP, IPv6 with flow-ID and drop-priority, call admission control, and priority scheduling with LBL's class-based queuing and weighted fair queuing. Working on traffic characterization.
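The class-based / weighted fair queuing mentioned above can be illustrated with a toy byte-weighted deficit round-robin scheduler. This is only a rough stand-in for the real CBQ/WFQ mechanisms; the class names and weights are made up for illustration.

```python
from collections import deque

class WeightedScheduler:
    """Toy byte-weighted deficit round-robin: each class earns credit in
    proportion to its weight each round and spends it sending packets."""
    def __init__(self, weights):
        self.weights = weights                       # class name -> relative share
        self.queues = {c: deque() for c in weights}  # per-class packet queues
        self.deficit = {c: 0 for c in weights}       # accumulated byte credit

    def enqueue(self, cls, pkt_bytes):
        self.queues[cls].append(pkt_bytes)

    def dequeue_round(self, quantum=1500):
        """One scheduling round; returns list of (class, bytes) sent."""
        sent = []
        for cls, q in self.queues.items():
            self.deficit[cls] += quantum * self.weights[cls]
            while q and q[0] <= self.deficit[cls]:
                pkt = q.popleft()
                self.deficit[cls] -= pkt
                sent.append((cls, pkt))
            if not q:
                self.deficit[cls] = 0   # an idle class cannot hoard credit
        return sent
```

With weights {"video": 3, "bulk": 1} and equal-size backlogs, one round sends roughly three video bytes for every bulk byte, which is the bandwidth-sharing behavior the notes describe.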
3-COM Activities with NTON / CAIRN - Barbara Denny, 3Com
Collaborative Advanced Interagency Research Network, a community to try out new ideas. Grew out of DARTnet (the DARPA Research Testbed network). Includes commercial companies (Cisco, Ipsilon, Sun, 3Com), as well as LBL, UCLA, NASA, SAIC, UCI, UCSC, SRI, CMU, Xerox PARC. Basically an OC3 network, uses Sprint and MCI (see www.cairn.net). It is a breakable (as opposed to production) network. Research areas include: security; mobility; distributed computing; multicast; RSVP; IPv6; and management. 3Com is doing IPv6, RSVP, tools for network experimentation and analysis, and identifying applications which drive the needs.
Distributed Physics Data Capture Management - Bill Johnston, LBL
Read and cache data, then make it available via the Web, building indexes on the fly. The idea for RHIC is to allow analysis at multiple sites without requiring a single enormous computing center. Want to start with simulation of RHIC data on a farm at LLNL, then distribute it to LBNL via NTON, where they have a high-powered cluster and will store the data in real time to a mass storage system. The idea is to demonstrate the viability of the approach. Will use the HPSS system at NERSC. LBL has a lot of experience with monitoring and analyzing network performance by distributing time-synchronized (XNTP) monitoring devices at critical points (sounds like it may be related to the NIMI work of Vern Paxson).
Berkeley Research - Bill ?, UCB
Main areas of relevance are: metacomputing; collaborative problem solving; multimedia authoring; and network research (wireless, RSVP, integrated services).
UCB will be an Internet 2 GigaPOP (install Jan-98, gated by availability of the Cisco BFR) which includes vBNS, CAIRN, CalREN2, and Internet 2. CalREN2 is a proposal to build an OC3 network for the State of California for R&E. The problem was that universities could not afford the commercial Internet.
Berkeley multimedia is a regularly scheduled weekly seminar (Wednesdays 12-2pm) broadcast on the MBONE since January 1995, with 10-200 remote participants, remote speakers, scheduled rebroadcasts for other time zones, and limited on-demand replay through the Web. They want higher bandwidth for higher quality feeds. Broadcast via ISDN, Internet MBONE, and an experimental MBONE (used to be BAGnet) with higher quality.
The model is to build an Internet Broadcasting System. This will support thousands of channels with live interactive programs, variable quality (100kbps to 20Mbps), replays with off-line interaction, archiving, viewing anywhere (PCs, Macs, Unix desktops etc.), integration with the Web (Java applets for control) etc. (see bmrc.ucb.edu, which will be available in a month to 6 weeks).
SLAC B-Factory Babar High Speed WAN Requirements - Les Cottrell, SLAC
See http://www.slac.stanford.edu/grp/scs/net/talk/nton-jul97/ for transparencies. Bill Lennon is checking out fiber availability. Current data accumulation rates, according to Terry Schalk, are ~500 TB/year. This corresponds to ~4000 Tbits/year, ~10 Tbits/day, ~400 Gbits/hr, ~400 Mbps. For comparison, STK Silo access is ~12 MBytes/sec/drive, or ~100 Mbps, so one would need at least 4 drives simultaneously transferring at 12 MBytes/sec. Switched 100 Mbps Ethernet transfer rates memory to memory are ~90 Mbps or 10 MB/sec, so multiple switched 100 Mbps connections are needed, presumably with Gbps for the back-plane.
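The unit chain above can be checked mechanically (taking 1 TB = 1e12 bytes; the notes' figures are order-of-magnitude):

```python
# Unit-conversion check for the BaBar data-rate figures quoted above.
TB_PER_YEAR = 500

tbits_per_year = TB_PER_YEAR * 8            # ~4000 Tbit/year
tbits_per_day = tbits_per_year / 365        # ~11 Tbit/day
gbits_per_hour = tbits_per_day * 1000 / 24  # ~450 Gbit/hr

# Tape-drive sizing: STK drives at ~12 MB/s each.
drive_mbps = 12 * 8                         # ~96 Mbps per drive, i.e. ~100 Mbps
four_drives_mbps = 4 * drive_mbps           # ~384 Mbps aggregate, i.e. ~400 Mbps
```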
Aircraft Design - Marjory Johnston, NASA / Ames
Airflow design computation requires switched 100 Mbps Ethernet to support the message passing interface (MPI) between multiple common off-the-shelf (COTS) systems (IBM SP2, SGI Origin 2000, HP 9000). They want OC3c (155 Mbps) at least. They could have a demo for Supercomputing (SC) 97.
Distributed Server Networks & the Delivery of Scaleable Multimedia Services - Dale Harris, Stanford
The research goals are to:
They are looking for partners to help with sites for distributed servers, and users of the application (distance education & training). They were using BAGnet, now that that is gone they are using the Silicon Valley Test Track (SVTT). They would like to use NTON in future.
High Speed Graphic Applications - David Meyers, NASA / Ames.
Real time wavelet compression of echocardiographic (EC) movies over a hybrid network. EC is a medical technique that uses ultrasound imaging of the cardiac system, providing a "motion picture" of the heart in action. It requires 270 Mbps as uncompressed digital video. It will be used on the International Space Station as a tele-medical application with remote observation and diagnosis in several countries by doctors specializing in space medicine. Since the satellite down-link has limited bandwidth (20 Mbps total video bandwidth to the space station), some compression is required. Will use wavelet compression as opposed to MPEG since it is less sensitive to data loss (Kaiser will not pay for surgical removal of artifacts introduced by compression). Research partners include NASA Lewis Research Center (Cleveland, Ohio), Cleveland Clinic, NASA R&E network (Moffett Field), Parallax Graphics (Santa Clara). ACTS satellite tests to be conducted in Oct-Nov 1997, NREN tests during Aug-Sep '97, NTON tests Oct-Nov 1997.
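To illustrate why wavelet coding degrades more gracefully than MPEG under data loss, here is a minimal one-level Haar wavelet sketch. The actual NASA codec is not described in the notes; this toy only shows the principle that dropping a small detail coefficient blurs one local region rather than corrupting a chain of dependent blocks.

```python
# Illustrative one-level Haar wavelet transform (pure Python, toy example).

def haar_1d(signal):
    """One decomposition level: pairwise averages (low-pass) and details (high-pass)."""
    avgs = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    dets = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avgs, dets

def inverse_haar_1d(avgs, dets):
    """Exact inverse: each (avg, det) pair reconstructs two samples."""
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out

# A made-up image scanline with a sharp edge in the middle.
scanline = [8, 10, 9, 11, 100, 102, 99, 97]
avgs, dets = haar_1d(scanline)

# "Compress" (or lose) the small detail coefficients; the large-scale
# structure, including the edge, survives in the averages.
dets_q = [d if abs(d) >= 2 else 0 for d in dets]
approx = inverse_haar_1d(avgs, dets_q)
```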
NTON / ACTS Experiment Summary - Julius Shu, Sprint
ACTS program includes 18+ experiments using 622 Mbps NASA ACTS. Includes LLNL, Sandia, UCB, Sprint Advanced Tech. Labs., UCSF, NSA, SRI, NASA/Ames, Kaiser Permanente, Hughes Research, Lockheed Martin, Telco Labs.
ACTS has 6 high data rate (OC12) ground stations (Hawaii, DC-Goddard, JPL, Bay Area, Sprint-KC, NTON). The ground stations can go up to OC48 (2.5 Gbps).
Current NTON Situation
The original program finishes September this year. It was funded by DARPA and others. They (NTON) are interested in wavelength on demand (i.e. wavelength dial-tone). Now that the NTON infrastructure has been established, it is recognized it would be a shame to terminate it, especially since it is just about ready to deploy for real applications. There is agreement in principle from the major participants (including Pac*Bell, Sprint, Nortel, Hughes) to keep the network alive for the next 2 years. DARPA has expressed a desire to provide continued support for NTON as part of its NGI contribution. Pac*Bell and Sprint need insight into the value of this work: is it worth the support level they have provided or will provide? At the same time, DARPA is more interested in putting the Defense back in DARPA. So it is important to showcase applications, and to identify to our sponsors, contacts etc. the importance of NTON. Bill exhorted us to consider the NTON in our future proposals. We may even be able to get some FTE support out of NTON.
A particular recognition, by the military, is the importance of handling large data sets. They want to get operational military people access to well organized data. This does not mean interactive airframe design; rather they have needs for geographical information and weather records going back multiple years. Monterey might be important due to its military contacts (base) and its physical proximity to NTON. The military also does not want to build infrastructures; rather they want to use existing infrastructures built by others (i.e. civilian, commercial). This causes concerns about the protection of military resources on a shared infrastructure. Thus security becomes a critical issue, and work on this will become increasingly important.
DARPA also wants NTON to get together with NASA (particularly Ames), NSF and DOE to come up with a White Paper that talks about the possible applications, research etc., so DARPA can carry it into policy meetings on NGI. This could indicate what bandwidth needs to be provided, what level of WDM (wavelength division multiplexing) is needed (e.g. are the proposed aggregates beyond what is currently available), and when. The vendors are also keenly interested in when it makes sense to provide for such applications in their public networks. For example, when should vendors such as Nortel and Sprint be planning to deploy OC48 (2.5 Gbps) equipment and services?
Next steps are to bring up the ATM network, manage it, and make an interface to the IP network. A large issue will be routing when multiple separate high-speed networks (some of which may be test beds) are connected together (e.g. NTON, ATD, various test benches etc.). People are pushing NHRP (Next Hop Resolution Protocol) to provide cut-through routing for IP circuits. However, much of NHRP is still theoretical and not implemented or production ready. As a result the management headaches can go up exponentially as more nets join together. NTON wants to extend down the coast to UC campuses (it may be hard to get to UCI), JPL / Caltech, and interconnect to a DARPA sponsored West Test bed in LA (which connects up Hughes and other companies). NTON will send out a short questionnaire on what the requirements are from the applications folks who presented earlier. This will help figure out how the current NTON will be used and how expansions may be valuable and used.
Getting coast to coast wavelength fiber (high-speed pipes) is still extremely expensive. Vendors do not appear willing to provide such capacity coast to coast cheaply for test beds. Thus, apart from satellite links, NTON is not currently moving to coast to coast high-speed land line pipes. WDM is currently typically multiplexed 4 ways (4 separate wavelengths); within a year this is likely to get to 8 ways.
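The capacity implication of the 4-to-8 wavelength step can be sketched, assuming each wavelength carries OC48 (2.5 Gbps) as elsewhere in these notes:

```python
# Rough per-fiber capacity at the multiplexing levels mentioned above.
OC48_GBPS = 2.5                       # OC48 line rate, as quoted in these notes
capacity_4_lambda = 4 * OC48_GBPS     # 4 wavelengths => 10 Gbps per fiber today
capacity_8_lambda = 8 * OC48_GBPS     # 8 wavelengths => 20 Gbps within a year
```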