DOE Telecommunications Conference 1997 (TCOM97), Oakland, California, June 2-5, 1997

Les Cottrell, June 4, 1997

Pursuit of Seamless Integration, Ted Michels, LLNL

There have been at least 5 national infrastructures in the history of the US: 1. the transcontinental railroad, 2. universal telephone service, 3. electrification, 4. the interstate highway system, 5. the digital network.

Future information access depends on universal access and human factors. Network appliances need to be pervasive, flexible, portable, and global. High speed and ubiquity are needed, and compression & security must be improved.

Future architecture must be customizable by the user, distributed to the user, and available at the user's point of need. Achieving this requires open, scalable, client/server architectures. Human factors must easily connect people to people and people to information, must take advantage of human processing capability, must allow for a variety of human interface capabilities, and must allow for easy capture & manipulation of information.

Future systems will have "knowledge" in the system, many operations & actions will be deployed through agents, translation of information into other forms needs to be done in real time, and integrated performance support will be needed to help guide problem solving. Everything will be virtually transparent.

LLNL Open Lab Net (OLN) achieved 99.69% up-time (including scheduled PM) from 12/4/90 to 5/14/97. It is designed to stay ahead of customer needs; in 1995 the backbone was upgraded to FDDI. The Web server is an integral part of the OLN business process, and OLN has instituted a Eudora email service. NAG, which includes 35 managers of LLNL's largest nets, oversees OLN operations. It is 100% recharge (charge-back). OLN has over 150 subnets serving more than 400 buildings: 9K computers, 400 ISDN telecommuters, over 700 open terminal server users, over 1K Eudora POP users on OLN POP servers, over 7K QuickMail users, and over 8K email users in total; QuickMail will be phased out. A mail router provides generic addressing.
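As a rough sanity check on what that availability figure implies, the sketch below derives the total downtime from the dates and percentage reported above (the downtime number itself was not stated in the talk):

```python
from datetime import date

# OLN reporting period and reported availability, as given in the talk.
start, end = date(1990, 12, 4), date(1997, 5, 14)
uptime_fraction = 0.9969

period_days = (end - start).days
period_hours = period_days * 24
downtime_hours = period_hours * (1 - uptime_fraction)

print(f"Period: {period_days} days ({period_hours} hours)")
print(f"Implied total downtime: ~{downtime_hours:.0f} hours "
      f"(~{downtime_hours / 24:.1f} days), including scheduled PM")
```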

The LAN is moving (FY99) to a Gbps switch (not ATM), with 155 Mbps ATM/ESnet to the Internet. Decisions are still to be made as to where to use copper vs fiber. A fiber optic communications backbone (a 1989 GPP, to be completed this FY) was installed because the ducts were filling up. Multimode and single mode fiber were put in between buildings for both OLN and the classified network; this eliminated the need for frequent, short fiber pulls and maximizes the existing duct structure. The requirement for fiber to the desktop has materialized more slowly than expected; even the requirement for single mode has been delayed.

An AT&T 5ESS switch was installed at the Lab in 1989. The idea was to reduce LLNL's current & future telecommunications costs, improve the ability to handle emerging data needs, and increase connectivity outside LLNL. They have 17K lines (8372 analog, 8966 ISDN). NB: 8K people, 21K computers. Offsite there are 157 analog lines and 435 ISDN lines. Again they charge back.

The evolution of the IS infrastructure has moved backward and forward between decentralized and centralized. Electronic Accounting Machines (EAM) in the 1950s were decentralized, then came a move to centralized DP in the 1960s, then in the 70's to division minicomputers, then in the 80's to PCs (even more distributed), then LANs moved a bit back toward centralization in the late 80's, then the infrastructure became centralized, and then the Web (more decentralized). The emphasis is moving from automation/efficiency toward business integration, more standardization, cross-platform software, alpha-numeric => graphic => Web based, flat file => relational => object, and interworking applications.

They work in 5 year stages, starting in 1988. They moved to Oracle client/server, including Oracle Financials on Sun/HP, and by 1993 most systems had been moved to Oracle. The product and technical standards need to be reviewed at regular intervals. Since Oracle runs on all platforms they are able to be hardware independent. The planning window represented 2 to 4 "out" years. In 1995 new thrusts included object-oriented development & production environments, a web infrastructure, middleware, and an improved data warehouse. The warehouse, started in 1987, was on an Amdahl using Nomad. By 1998 everything will be Oracle, including PeopleSoft (the 1st paycheck using the new system was cut in April 1997); the warehouse is being moved, and the old IBM badging system is being moved. So the target environment is Oracle and C++.

In 1995 an executive level business oversight committee (IBSIT) was formed. The next generation will include: a Web front end (Lab business via browser), zero-training transactions (natural & intuitive), single sign on, strong authentication control, a single point of entry, drill down, and zero-wait application deployment. These affect ALL future IS deployment projects at LLNL, not just the data warehouse. They will have a single end-user support infrastructure. They are moving from cross-platform to vendor independent, relational to relational/objects, uniform standards to open standards, graphical to Web based, and business-unit centric to enterprise centric. Technologies: Netscape Communicator, Oracle 8/Gemstone, HP, Sun, NT, Java. The goals are to reduce the non-value effort needed to run the Lab and to provide the "prosumer" (a buzzword contraction of producer/consumer) with natural tools. What's coming that could make the seams bigger/smaller: bandwidth (ADSL, cable, NGI); Internet telephony; PKI security and digital signatures; network computers; object telephony; push/pull information access. Characteristics of the future: travel without a computer; ...

Bill Gates said: "There is a tendency to overestimate how much computing will change in 2 years, and to underestimate the change in 10 years. In 20 years computing will increase by a million."

Computing Sciences at LBNL - Sandy Merola

When they get their T3E-900 (400 peak GFlops) they will be the largest computing center in the world outside defense. They have world class computing facilities and world class computing scientists. NERSC is a world leader in accelerating knowledge by use of supercomputing. It was hard to hire networking people, so they had to be creative: hiring bonuses and salaries competitive with industry. On the other hand they had no problem hiring PhD scientists to work on computing science; 43% of NERSC staff have PhDs. They hired people from Labs (ANL, LLNL, NERSC, PNL), Ames Research Lab, GSFC, Cornell, SGI, Intel, UCB, CalTech, Cray Research, and IBM Almaden.

Practical approaches to building an effective telecommuting environment - Jim Leighton ESnet

ESnet accepts about 5 TB/month. The move to LBNL was announced in Nov '95. JFL took the job with a running net & budget but no people, offices, etc. The move meant an extra 50 minutes of commuting, and the job market for network engineers is highly competitive, especially with the proximity to Silicon Valley.

They put into place a telecommuting program covering policy, facilities, hardware, transportation, and maintaining presence. Policies include: working at home; safety, health, and environment; the need for a corporate policy; establishing a commercial telecommute work center (TWC) - this was a significant amount of paperwork to lease; and who is allowed to work when and why - each professional is allowed 2 days of their choice at the TWC or at home. Group leaders are responsible for ensuring work assignments are appropriate. Facilities: establishing the TWC (walls, mail, burglar alarms, supplies, video, fax, ISDN, network); the home environment must reasonably match the work environment.

The approach is hardware intensive: the intent is to make people effective & productive; some hardware must be duplicated (printers), some can be portable (laptop computers); cell phones. Transportation: parking space at LBNL is very scarce, so they encourage public transportation and provided a van pool initially (one year) to introduce people to the advantages for the longer term.

Presence: avoid people feeling isolated; require one day a week (Wednesday) as an "all hands" day; video conferencing between the TWC & LBNL; staff should be easy to locate & communicate with regardless of physical location - the LBNL office phone also rings at the TWC. They have established a series of staff web pages with status info (cell phone, pager) and where each person expects to be each day; the pages update automatically during the day. There must be an easy way to input and view the information. This has been found to be a very effective tool.
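The talk did not describe how the status pages are generated; purely as an illustration of the idea, a small script along the following lines could regenerate such a page periodically (all names, fields, and file paths below are hypothetical):

```python
# Illustrative sketch only: the actual ESnet implementation was not described.
# The staff entries, fields, and output file are made-up placeholders.
import time

STAFF = [
    # name, cell phone, pager, where they expect to be today
    ("A. Engineer", "x1234 (cell)", "pager 555-0100", "TWC; at LBNL Wednesday"),
    ("B. Analyst",  "x5678 (cell)", "pager 555-0101", "home; LBNL in the afternoon"),
]

def render_status_page(staff):
    # Build one HTML table row per person, then wrap it in a simple page.
    rows = "\n".join(
        f"<tr><td>{name}</td><td>{cell}</td><td>{pager}</td><td>{where}</td></tr>"
        for name, cell, pager, where in staff
    )
    return (
        "<html><body><h1>Staff status</h1>"
        f"<p>Updated {time.strftime('%Y-%m-%d %H:%M')}</p>"
        "<table border=1><tr><th>Name</th><th>Cell</th><th>Pager</th><th>Today</th></tr>"
        f"{rows}</table></body></html>"
    )

# Re-run periodically (e.g. from cron) so the page tracks the day's plans.
with open("status.html", "w") as f:
    f.write(render_status_page(STAFF))
```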

Observations: pros - working at home can be twice as productive, less time on the road means more time working, it is an effective recruiting aid, and it reduces traffic/pollution; cons - it takes appreciable up-front costs, requires a good deal of trust and professionalism, and depending on the work/job assignment it may not be feasible.

LAN Practical Solutions - Gerald Cook SAIC

The current topology is very flat with an FDDI backbone. They are moving to a switched full-duplex 100 Mbps backbone with desktops at 10 Mbps, using full duplex Ethernet and taking advantage of VLANs. The plan is a weekend cutover: attach the edge devices, then test and document each segment. The cable plant (including fiber) will be installed & tested prior to the cutover. A vendor engineering team will be there from Wednesday through Wednesday with the cutover on the weekend. The project started (i.e. got the go-ahead) in Jan 97, and they hope to be done by Labor Day 1997. 1400 users cost $290K (excluding in-house engineering or the fibers).

Pitfalls: network management tools, SNMP, Kaspia (a new monitoring tool that uses RMON tools/probes, provides baselines, and notifies of problems), NetDirector (by UB, which has problems with asynchronous connections), and new NICs for servers that are connected to the current FDDI ring.
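The baseline-and-notify behavior attributed to Kaspia can be sketched roughly as follows. This is not Kaspia's actual algorithm or interface; the counter source, window, and threshold are illustrative assumptions, and the samples would in practice come from SNMP/RMON polls of a counter such as ifInOctets.

```python
# Rough sketch of baseline-and-alert monitoring in the spirit described above.
from collections import deque

def watch(samples, window=288, threshold=3.0):
    """Yield an alert whenever a sample strays far from the rolling baseline."""
    history = deque(maxlen=window)          # e.g. one day of 5-minute samples
    for sample in samples:
        if len(history) >= 10:              # wait for a minimal baseline
            mean = sum(history) / len(history)
            std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5 or 1.0
            if abs(sample - mean) > threshold * std:
                yield f"deviation: {sample} vs baseline {mean:.0f} +/- {std:.0f}"
        history.append(sample)

# Example with synthetic traffic counts: a sudden spike triggers an alert.
for alert in watch([100, 102, 98, 101, 99, 103, 100, 97, 102, 100, 101, 900]):
    print(alert)
```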

They are using Aperture (only a drawing tool) but would like a more automatic tool. They have been very aggressive about keeping it current, so they really know where things are.

Internet Monitoring - Dave Martin, Les Cottrell, Connie Logg HEPNRC

FNAL WAN traffic by top-level domain: edu 48%, gov 15%, it 7%, com 5%, jp 3% ...

ISP connections to the Internet are not "seamless". Of the top 10 sites, only 50% are ESnet sites. Multiple nets are involved (e.g. 6 ISPs between SLAC and UPENN). The monitoring only looks at end-to-end performance.

Uses: determine the need for an upgraded link; complain to vendors in a constructive way (provide backup data); determine the relative performance of ISPs; determine the change after network "improvements".

Collecting sites: US (ARM/PNL, BNL, HEPNRC, ORNL, SLAC, UMd), Italy, Canada, Switzerland.

HEPNRC is developing code to automate retrieval of data from the collecting sites; SLAC is developing a better analysis tool. Future: integrate SLAC's long-term analysis into WWW form; allow users to choose groups of links rather than individual links; increase the number of collecting sites. They are looking for volunteers.
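The collection code itself was not presented; the kernel of this kind of monitoring is simply periodic pings to a set of remote hosts, with packet loss and round-trip time logged per host, roughly along these lines (the host names and log format are made up for illustration, and this is not the HEPNRC/SLAC code):

```python
# Minimal sketch of ping-based end-to-end monitoring as described above.
import re
import subprocess
import time

HOSTS = ["remote1.example.edu", "remote2.example.gov"]   # hypothetical monitored sites

def ping_stats(host, count=10):
    """Return (loss_percent, rtt_list_ms) parsed from the system ping command."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
    return (float(loss.group(1)) if loss else 100.0), rtts

for host in HOSTS:
    loss, rtts = ping_stats(host)
    median = sorted(rtts)[len(rtts) // 2] if rtts else None
    print(f"{time.strftime('%Y-%m-%d %H:%M')} {host} loss={loss}% median_rtt={median}ms")
```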

Q: Is there any work anywhere to port to NT?

Q: Is it done from inside or outside the firewall?

Q: What does one do when one finds a problem; how can you get multiple ISPs to take ownership of the problem?

Q: Could this be commercialized?

Next Generation Internet in California - Susan Estrada <susan@aldea.com>, <http://www.aldea.com/cenic>

CENIC = Corporation for Education Network Initiatives in California. Estrada is President & CEO of Aldea.

Overview: What is CENIC, purpose of CalREN-2, Advantages of the Next Generation.

Bill Gates once said "640K ought to be enough for anybody".

Why: Today's commodity Internet is useless to many researchers. Industry can't focus on the next generation Internet since they are too busy funding growth. California's educational institutions collectively can bring powerful leverage and talents. These include UC, Stanford, Caltech, CalState, and USC (NB: 50% of engineers in Silicon Valley are graduates of the UC system).

CENIC will: put California higher education & research in a leadership position for internetworking technology; create a significant pool of experts; leverage the buying power of participating institutions; provide a focused investment strategy; become a focal point for national Internet projects; provide better "commodity" Internet performance; and build an advanced, robust infrastructure.

What is CENIC: a 501(c)(3) educational charity; a board of reps (UC, Cal State, USC, Stanford, Cal Tech); staff and an executive director; advisory councils (technical, management, research, education) composed of the people who pay the bills.

Types of Networks in the NGI: networking research (e.g. wavelength division technology); research networks (focus for CalREN-2); operational networks (NSI, ESnet); production networks (can buy from somebody).

NGI: CENIC will connect (in the center) to vBNS, Internet 2, Fed's NGI, NASA, ESnet, plus other initiatives.

CalREN-2 is modeled on prior collaboration success stories. It is a research network, it is applications based, it tests new technology, and it is building a "statewide" laboratory. It will: test key technologies such as QoS and RSVP; test cost allocation models; develop performance benchmarks (the key is real applications); and encourage interoperability, in particular vendor cooperation. It is based on Sonet rings at OC48 (one in the Bay Area, one in the LA basin), fresh-off-the-line hardware (Cisco looks like the chosen vendor), and applications. There are gateways into ESnet, vBNS, and NSI. The CalREN-2 Sonet rings are basically (AKA) virtual Gigapops with interconnects at OC12 and OC3.

Each site will have a Sonet mux from the phone company (also managed by the phone company), which goes into an ATM switch (probably Fore), which connects to the campus IP switch. Phase 1 installation is planned for late fall (gated on hardware availability). There is ongoing recruitment for future phase participants, including government, commercial, etc.

How much it costs depends on private sector partnerships, which are about more than price: they define common interest areas. It also depends on university/government partnerships. NSF grant award: $3.8M for 2 years.

Bay Area includes UCB, UCSC, and UCD.

Collaborative Conferencing - Jim Berry & Diane Gomes SNL/Alb.

See http://www.ca.sandia.gov/sts for more information.

What's it for: it enables the exchange of information & ideas in real time to make better, faster decisions without leaving the office.

How is it used: share applications & edit files even if only one person has a copy of the application; use the whiteboard (WB) and messaging to sketch ideas, mark up documents, or record action items; transfer files remotely; use a shared clipboard.

Real-time video & audio is available today but the technology is still maturing; the phone is the preferred method for audio communications.

The important standards are: T.120 (data sharing gateway), H.320 (ISDN conferencing), H.323 (LAN conferencing), and H.324 (PSTN conferencing). Standards mean using the least common denominator and limit technology improvements, BUT they provide cross-product interoperability and protect equipment investments. It is critical to the success of this market to get seamless interoperability, and we do not have that today.

To make the video work well one needs video grabber interfaces and drivers to be uniformly available; to use audio one needs full duplex audio interfaces and drivers to be uniformly available. We are not there yet. Thus at the moment many engineers who have tried it tend to say they don't need inadequate talking heads, but they do find the data conferencing to be extremely valuable.

The speaker gave a demo of Micro$oft's NetMeeting (which came from PictureTel's Livewire tools) with no video or audio, just data conferencing, where the two users shared a whiteboard, had an ASCII chat window, could cut and paste between computers, and shared a document which both users could edit via Word, etc. It was impressive and would also be very valuable for doing "side by side" consulting, where the expert can have a copy of what the user is doing and guide them through things without being physically colocated.

Apparently Farallon is putting hooks into a conferencing tool they have (Netopia?) to make it T.120 compliant, so it should work with other T.120 capable applications (e.g. NetMeeting).