ESnet Site Coordinators Committee (ESCC) Meeting
Jefferson Lab, Newport News, VA, September 30 - October
ESnet Site Coordinators Committee (ESCC) Meeting *
ESnet Network Group Status Update - Joe Burrescia *
ESCC NMTF/NMFG Status - Les Cottrell, SLAC *
IPng WG Status - Bob Fink, ESnet *
Multipath Routing Protocol Test (Steve Batsell) *
Discussions in the Hall *
Report from Washington - George Seweryniak *
ESnet Steering Committee Report - Sandy Merola *
ESnet Report - Jim Leighton *
Network Information & Services Group Update - Alan Sturtevant, ESnet *
Video Collaboration Services Scheduler - Craig Tenney, ESnet *
Security Issues *
NGI Futures - Bob Aiken *
Java Based Applications - Dave Dowty, Christopher Newport University *
A Coordinated Browsing System - Mohammed Zubair, Old Dominion University *
This was the Fall meeting of the ESCC. There were three
attendees from SLAC including Warren Matthews, Bob Cowles and myself. The
first afternoon was devoted to the Network Working Group, the second day
to plenary issues (updates from DOE, ESnet, the ESSC, the networking WG report,
ESnet information and services, etc.) and the last 2 days to working groups
on distributed computing (DCE, remote conferencing, security issues, directory
services etc.). I attended the first 2 days that covered the network and
plenary issues. There was considerable interest in the SLAC/HEPNRC monitoring
tools. The exclusion of DOE from the NGI (Next Generation Internet) initiative was
a hot topic; funding may still be available for DOE mission-oriented applications
that are network challenged. Interest in security is increasing. As is
often the case, much of the useful sharing of information came from break-time
discussions, so I have included a section entitled "Discussions in the Hall"
to cover them.
ESnet Network Group Status Update - Joe Burrescia
Two new members have joined; one, Chin Guok, is working on monitoring.
They have set up special multicast scoping for the Russian videoconferences.
They are also investigating PIM running on routers carrying full routes; sparse mode
may be problematic, while dense mode allows pruning. Sites need to be aware when
setting up tunnels to non-ER-funded sites, since ESnet is not authorized to
carry non-ESnet traffic.
Several real-time events have been supported recently, including
the Milwaukee collaboration, DOE 2000, a White House demo, and the ESSC. They will support
the SC97 conference (http://www.supercomp.org/sc97)
and will pull in an OC3.
IPv6 & 6bone
A lot of work on BGP4+, new sites, a new registry, etc. They
have three interworking BGP4+ implementations (one is from Cisco). They are trying
to bring up a mail host supporting IPv6 natively. They plan to run IPv6
over ATM; an RFC on IPv6 over ATM PVCs was just published. Cisco says it will
be ready to implement when the spec is ready.
Spectrum Network Management
Spectrum works well for the LAN, but not so well for ATM/WAN.
Cabletron claims it will provide better WAN/ATM support for Spectrum,
but this has not happened yet. They are moving to version 4.0.3, which has monitors
for alarm thresholds. Joe is interested in setting up an ESnet Spectrum
They are working on a new statistics system called BestView, which
has been in alpha/beta for the last 6 months. They are in the process of cutting over from
the in-house designed system.
Increasing number of network attacks
A ping attack sends an ICMP echo request to a broadcast address
at your site with a spoofed source address; this swamps the spoofed source with
ping responses. Spoof filtering is needed. Turn off IP directed broadcast
if possible; however, this will break DHCP helpers and is rumored to cause
problems with the Microsoft browser. You may have to deep-six the packets at
the firewall if they are addressed to the all-1s or all-0s addresses. If
you are a victim, you will see lots of ping traffic. It is hard to find
the real perpetrator; one can only track back to where the packet enters the network.
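The filtering described above can be sketched as follows. This is an illustration only: the site prefix and function name are hypothetical, not an actual router configuration.

```python
from ipaddress import ip_address, ip_network

SITE_PREFIX = ip_network("192.0.2.0/24")  # illustrative site prefix

def drop_at_border(src, dst, inbound):
    """Return True if a border router doing spoof filtering and
    directed-broadcast blocking should drop the packet.

    inbound=True  : packet arriving from the outside world.
    inbound=False : packet leaving the site.
    """
    src, dst = ip_address(src), ip_address(dst)
    if inbound and src in SITE_PREFIX:
        return True   # outside packet claiming an inside source: spoofed
    if not inbound and src not in SITE_PREFIX:
        return True   # egress filtering: our hosts may not spoof others
    if dst in (SITE_PREFIX.broadcast_address, SITE_PREFIX.network_address):
        return True   # all-1s or all-0s directed broadcast: smurf amplifier
    return False
```

Egress filtering does not protect the site itself, but it keeps the site from being used as the spoofed source in someone else's attack.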
ESCC NMTF/NMFG Status - Les Cottrell, SLAC
IPng WG Status - Bob Fink, ESnet
For more see http://www.6bone.net/
The IPng WG is really dormant while various things happen:
the 6bone evolves, sites connect to the 6bone with ESnet help, and it waits
for real activity.
What is the 6bone?
An international testbed to test IPv6 implementations
& standards, try out IPv6 transition strategies, get early applications/operations
experience, motivate implementers & ISPs, gain experience with IPv6,
and start the transition.
Other areas besides 6bone are doing testing including
trade shows & UNH.
Renumbering of sites in IPv6 is a very important issue:
it gets around the problem of sites moving to a new ISP without giving up their
site address, and thereby making the new ISP carry a new lot of routes.
FTP Software is dropping development of its Windows
IPv6 stack. Microsoft is believed to be willing to implement IPv6 once
it has the common driver format in place (WNT = W95 drivers) and things
look a bit clearer. The IPv6 folks do not want to bring in Microsoft until
more issues are resolved and it becomes less of a research project. Two
people in Sweden are working on a WNT implementation. The Cisco version is
currently not optimized (it runs process-switched).
There is still a need for an IPv6 over ATM specification.
Someone from Ipsilon was working on a spec, Japan also claims to have one,
plus another one appeared. Cisco is said to have completed an implementation
of a specification, but it is unclear which one.
Mike O'Dell (UUNET) raised concerns about the old addressing
structure, which resulted in aggregation-based addressing. The hope is that most ISPs
will have converted to IPv6 in 4 or 5 years, but some will not be there,
so how does one skip across the intervening IPv4 ISPs? The idea is to
use a Next Hop Resolution Protocol (NHRP) server that returns the IPv4 tunneling
boundary point, so one can tunnel to it and then go via IPv6 within the
IPv6 cloud. It will be an extension of NHRP.
Besides running out of IPv4 addresses, the current
IPv4 infrastructure is suffering from a complexity explosion. To address
this they introduced Aggregatable Unicast Addressing. The TLA (Top-Level Aggregator) is for the <
8000 big players (the MCIs, Sprints, etc., though it is unclear how one determines who
is a TLA, and who makes the determination). This limits the size of the
routing tables that have to be carried by the TLAs; i.e., one only needs to know
how to reach the right TLA, and the TLA will determine how to get to the
NLA (Next-Level Aggregator). This removes the need for a centralized registry (as long as one can
assign TLAs). The SLA (Site-Level Aggregator) is for the site; it will probably be the site subnet
number. An important advantage of aggregatable addressing is that it allows site renumbering.
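The aggregatable unicast layout described above (a 3-bit format prefix, a 13-bit TLA, which is where the limit of under 8000 top-level players comes from, 8 reserved bits, a 24-bit NLA, a 16-bit SLA, and a 64-bit interface ID, per the draft that was later published as RFC 2374) can be illustrated by unpacking the bit fields. The function name is illustrative:

```python
import ipaddress

def split_aggregatable(addr):
    """Split an IPv6 aggregatable global unicast address into its
    FP/TLA/RES/NLA/SLA/InterfaceID fields (RFC 2374 layout)."""
    n = int(ipaddress.IPv6Address(addr))
    return {
        "fp":  (n >> 125) & 0x7,         # 3-bit format prefix (001)
        "tla": (n >> 112) & 0x1FFF,      # 13 bits -> at most 8192 TLAs
        "res": (n >> 104) & 0xFF,        # 8 reserved bits
        "nla": (n >> 80) & 0xFFFFFF,     # 24-bit next-level aggregator
        "sla": (n >> 64) & 0xFFFF,       # 16-bit site-level aggregator (subnet)
        "iid": n & 0xFFFFFFFFFFFFFFFF,   # 64-bit interface ID
    }
```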
EUI-64 Interface IDs are used to identify interfaces
on a link. The IEEE EUI-64 format has the extended IEEE 48-bit MAC address
embedded in it. The old Ethernet address consists of a manufacturer code
(cccccc) and a device field (eeeeee). The EUI-64 that incorporates
this is thus ccccccFFFEeeeeee (where FFFE indicates an embedded Ethernet address).
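The construction above can be sketched in a few lines. This follows the modified EUI-64 used for IPv6 interface IDs, which inserts FFFE between the two halves of the MAC and also flips the universal/local bit; the function name is illustrative:

```python
def mac_to_interface_id(mac):
    """Build the IPv6 modified EUI-64 interface ID from a 48-bit MAC:
    insert 0xFFFE between the manufacturer (cccccc) and device (eeeeee)
    halves, and flip the universal/local bit of the first byte."""
    b = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(b) == 6, "expected a 48-bit MAC address"
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))
```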
Selling IPv6 will be important. It will need a
transparent conversion; desktops will need to be delivered with both an
IPv4 and an IPv6 stack. It also has to be seen as not a choice, since
IPv4 will not meet the needs. If IPv4 goes away, it will do so very slowly and
over the long term. New devices (e.g. traffic-light devices) will probably
come with only an IPv6 stack.
People are starting to do IPv6 pinging so they can ensure
they have a production backbone.
Bob feels that IPv6 is still 2-4 years away (i.e. before
you can make an honest call that it is a success).
Multipath Routing Protocol Test (Steve Batsell)
For more see http://www.epm.ornl.gov/~sgb/net.html
Conventional routing optimizes a single metric such as
delay, hops, bandwidth, jitter, shortest path, or shortest-widest path.
QoS routing selects a path to meet QoS requirements. Batsell/Rao have implemented
a multipath routing algorithm and will incorporate it into gated.
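The notes do not give the Batsell/Rao algorithm itself; as a sketch of the flavor of QoS routing mentioned above, here is a shortest-widest path computation (maximize the bottleneck bandwidth, breaking ties on hop count) via a modified Dijkstra. The function name and graph representation are illustrative:

```python
import heapq

def shortest_widest_path(graph, src, dst):
    """Return (bottleneck_bandwidth, path) from src to dst, choosing the
    path with the widest bottleneck link and breaking ties on hop count.
    graph: {node: {neighbor: link_bandwidth}}."""
    # Heap ordered by (-bottleneck, hops): widest first, then fewest hops.
    heap = [(-float("inf"), 0, src, [src])]
    done = set()
    while heap:
        neg_w, hops, node, path = heapq.heappop(heap)
        if node == dst:
            return -neg_w, path
        if node in done:
            continue
        done.add(node)
        for nbr, bw in graph.get(node, {}).items():
            if nbr not in done:
                # New bottleneck is min(old width, this link's bandwidth).
                heapq.heappush(heap, (max(neg_w, -bw), hops + 1, nbr, path + [nbr]))
    return 0, []
```

Dijkstra remains correct for the bottleneck (max-min) objective because widening never helps a downstream label, just as lengthening never helps in the shortest-path case.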
In Spring '98 they want to do a test on a Morphnet version of ESnet.
If interested in partnering, send email to firstname.lastname@example.org
It is a more efficient way of doing QoS, and reduces the risk
Discussions in the Hall
DHCP
ANL has one DHCP server. It is for the business-services
type people only. It is probably based on a WNT platform. It is not centrally
managed.
BNL does not run a DHCP server.
FNAL is looking at running a single central DHCP server.
It will be centrally managed. Mark Kaletka believes it is based on the
Cisco DHCP server and runs on a Sun. They give out dynamic IP addresses
with dynamic names (e.g. temp1). They keep a log of Ethernet-address-to-IP-address
mappings for auditing. They also support static DHCP. The contact
is Matt Crawford <email@example.com>
CEBAF has no DHCP server.
ORNL provides a single DHCP server for its PPP service.
LBL runs a DHCP server. They give out dynamic addresses.
For conference rooms they allocate fixed addresses so that they can have
demos with computers that do not support DHCP clients. They have a Web
page with the IP addresses for conference-room taps.
Coordinated Password Files
ORNL has built some tools that allow one to enter a password
for an account into Unix and then have that same password/userid placed
in the WNT registry and the VMS userid/password files. It is called CAMS,
and the contact for WNT is Sandy Guinn.
Mark Kaletka said that FNAL has something similar. The contact
is Keith Chadwick firstname.lastname@example.org.
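The FNAL DHCP scheme described above (dynamic addresses with generated names such as temp1, plus an audit log of Ethernet-to-IP bindings) might be sketched like this. The class and all its details are illustrative, not FNAL's actual implementation:

```python
import time

class LeasePool:
    """Toy DHCP-style allocator: hands out dynamic addresses with
    generated names (temp1, temp2, ...) and keeps an audit log of
    Ethernet-address-to-IP bindings for later tracing."""
    def __init__(self, addresses):
        self.free = list(addresses)
        self.static = {}       # MAC -> fixed IP (static DHCP)
        self.audit = []        # (timestamp, MAC, IP, name or None)
        self.counter = 0

    def add_static(self, mac, ip):
        self.static[mac] = ip

    def lease(self, mac):
        if mac in self.static:
            ip, name = self.static[mac], None   # static entry keeps its DNS name
        else:
            ip = self.free.pop(0)
            self.counter += 1
            name = f"temp{self.counter}"        # dynamic name, e.g. temp1
        self.audit.append((time.time(), mac, ip, name))
        return ip, name
```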
Remote Conferencing
Jim Rome of ORNL showed a videoconference of a meeting
being broadcast (by streaming video) from Gatlinburg. It was using a product
from RealAudio (see http://www.realaudio.com/),
which was impressive. The resolution looked like about 400x300, and it was
using about 100 kbps. The audio was excellent; the video was also very good.
They cache the feed so it is about 20 seconds behind, and this is how they
get the good performance. It was very well reviewed in a recent PC magazine.
The cost is about $8000 for the server to support up to 80 clients. The
client software is freely downloadable over the Internet and is bundled
in MSIE 4.0. Microsoft recently licensed the technology and bought 10%
(non-voting) of the company's stock. Jim is also keen on using Internet
Relay Chat. One collaboratory tool he also likes is the new Visual
IRC product (see http://www.virc.com/)
for Windows. It also has some form of video support and it is free (at
least at the moment).
LBL has a joint project with industry on diesel engines.
The industry folks are very interested in NetMeeting, and so LBL is also
getting more interested in it. Stu claims there will be a NetMeeting for
Unix (it is Windows-only at the moment) and that it will support multicast
(it does not at the moment). There is also interest in
something called Hub & Arrow, since it allows one to add features such
as "floor control" (e.g. who can write on the White Board during a presentation,
who controls the microphone). The Applications Working Group has a web
page at http://www-itg.lbl.gov/AWG.html
from which one can get useful notes (FAQ) on how to use the video tools.
Charging for Network Access
CEBAF is a single-purpose Lab and does not charge outside
users for network connections, unless it requires a major building
rewiring or some other major effort. They have only a few 100 Mbps ports,
and are giving them away to power users who have a verbally identified
need. If they do not feel the request is justified, they will still provide
the 100 Mbps connection as long as the requester pays their part.
ORNL charges ~$40/month for the basic UID service; this
includes email, access to the help desk, various accounts, access to the
online databases, insertion in the phone directory, etc. They also have
a $15/month charge for providing WAN access to the Internet and the intersite
infrastructure. For connections on public networks (i.e. plugging a
machine into a wall jack on a public network) they charge ~$20/month. This
fee is based on cost recovery of the network services (i.e. it takes the LAN
network budget and divides it by the number of IP addresses registered).
They also have a substantial amount of private networks which are not charged,
but they are moving increasingly to public networks, as users need better
connectivity and recognize the costs of running their own LAN. To ensure
that hosts are registered in the DNS server, the Web server looks at the
IP packet, does a reverse lookup to the name server, and if the node is
not registered then they do not serve up pages to it. They do not charge
for the initial connection. There is also an assessment of ~ $7/month on
public ports at the Lab to enable continuous evolution and upgrading. People
who want extra resources (e.g. 100Mbps ports) are addressed on a case by
case basis. Sometimes the network group covers the initial connection cost,
sometimes the user covers it, and sometimes it is shared.
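The ORNL reverse-lookup gate described above (the Web server refuses to serve pages to hosts not registered in DNS) can be sketched as follows. In practice the resolver would be the system's gethostbyaddr; it is injected as a parameter here so the sketch is self-contained, and all names are illustrative:

```python
def serve_page(client_ip, resolver):
    """Mimic the ORNL policy: do a reverse DNS lookup on the client's
    IP address and refuse service if the node is not registered.
    resolver(ip) returns a hostname, or raises KeyError if unregistered
    (socket.gethostbyaddr would raise socket.herror instead)."""
    try:
        host = resolver(client_ip)
    except KeyError:
        return None                      # not in the DNS: serve nothing
    return f"200 OK for {host}"

# Illustrative stand-in for a reverse-DNS zone:
REGISTERED = {"192.0.2.10": "host1.example.gov"}

def fake_resolver(ip):
    return REGISTERED[ip]
```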
At LBL most of the networking is charged to infrastructure.
They long ago figured it was too complex and costly to try and charge based
on usage. They do charge one time for new hookups. The costs used to be
$260 for a 10Mbps shared port, $480 for a switched 10Mbps port, & $980
for a 100Mbps switched port. They are reviewing the charges based on new
Ethernet equipment. This includes: the Cisco 1900 with 24 switched 10 Mbps
ports with a 100Mbps feed for < $60/port; the Cisco 2926 with 26 10/100
switched ports for <$400/port. Bob said that Bay's new switched Ethernet
offerings look much better than Cisco's price wise, and hopes Cisco will
become more competitive. The above costs do not include core switches or
routers, which are not charged back to the users. They are skipping providing
100 Mbps shared Ethernet ports; everything is moving to switching. The avoidance
of 100 Mbps shared is due to:
the limited savings compared to switched 100 Mbps;
the increased difficulty of isolating problems compared to switched Ethernet;
problems with using auto-sensing ports (if there is nothing on the segment
and a 10 Mbps-only workstation connects up, the whole segment will run at
10 Mbps even though many of the workstations could run faster);
the reduced bandwidth availability due to half duplex and sharing.
Steve Batsell of ORNL has been working with a Ph.D. candidate
to look at how to use statistical experiment design to see how to optimize
network monitoring. The goal is to get the network monitoring to provide
the optimal information about what one is interested in, for the minimum
amount of resources used (number of collection sites, number of remote
sites monitored, amount of data collected etc.) They used the early tools
that were developed between SLAC/ORNL/HEPNRC for gathering the data. I
had a short look at the thesis; it was heavily oriented to statistics (as
opposed to networking) but it looked very interesting. Anyhow, Steve hopes
to get some funding to support further effort in this area, and wants to
collaborate with SLAC to get the latest tools and access to all the data
we (SLAC/HEPNRC) are archiving. He hopes to visit SLAC in late October/November
to discuss this further.
ORNL has hired someone to take over from Gary Haney (who
was doing the WAN monitoring at ORNL and left to head up a network group
at a local hospital). There is a person in the network research group (Lawrence
MacIntyre) and another person in the operations group who will be involved
in picking up the SLAC/HEPNRC code and making ORNL into a Collection site.
Bill Wing agreed to take the lead on finding a place to publish
the paper we (HEPNRC, ORNL & SLAC) put together last January for submission
to the IEEE. We (Bill, Dave Martin & I) felt it was worth the effort,
and that some revision might be needed to bring it up to date.
I talked to Mike O'Connor of BNL about the traceroute CGI
script BNL has. Terry Healy developed it. Mike agreed to talk to Terry
about making the tool public domain and providing instructions on how to
get and install it from the Web. If this were done, we could put a pointer
to the tool on our Web page and encourage collection sites and remote sites
to install it. Mike is also interested in Java applets so I showed him
Mapnet and discussed with him about extending it to show our data on performance.
Mike also mentioned that BNL has a useful tool that displays nodes registered
by subnet, nodes responding to pings, free addresses by subnet etc. He
is willing to share it with others.
BNL is also interested in installing a Surveyor; I will pass
on the information to Guy Almes.
We need a name for the SLAC/HEPNRC monitoring tools. It has
to do with "brand name recognition" so each time one talks about them one
does not have to describe in detail what one is talking about together
with who should take credit etc. I discussed this with Bob Aiken and he
agreed. He said having the name Morphnet (Multipurpose Operational and
Research and Production Hierarchical network) to refer to the idea of building
a multilayered network with both production and research parts at each
layer has been enormously helpful in promoting the idea. A couple of ideas
came up: PingEM (for Ping ESnet Monitoring, or Ping End-to-end Monitoring),
and PingWAN. I am also discussing this issue with the HEPNRC folks and
previously had suggested pmeter and pmon.
ESnet has hired a new person, Chin Guok, to look after network
monitoring. He appears very interested in the SLAC/HEPNRC tools, though
his main emphasis is on understanding the performance of the overall ESnet.
We had several discussions on how they could use our tools, and I encouraged
him to install them at LBL/ESnet.
The DoE budget has been approved by the joint House &
Senate Conference Committee and sent to the President.
Report from Washington - George Seweryniak
Large scale networks:
ESnet & NGI programmatic Goals
The FNC and FNCAC have gone. Their work will be picked up by a committee
(the LSN (Large-Scale Networking) WG, which falls under the NSTC's Computing,
Information & Communications (CCIC) R&D Subcommittee), led by George Strawn
& Dave Nelson.
The PSWG/CIS privacy & security working group will continue, for
information sharing among agencies. A report is due out at the end of this month.
EOWG → JET (Joint Engineering
Team): will look at the sharing of networking among the agencies and take up the work
of the FNC. Working on getting a common AUP. There is a proposal to also worry about International
ESnet progress report expected early CY 97; it is a little behind.
It is an important report in terms of informing the government what we
NGI Concept paper/Impl plan Jul 97 complete (see http://www.ngi.gov/)
NGI workshop May 13
ESnet video support re-evaluation mid CY97 complete; there
was a threat that support would be cut off in favor of a commercial service.
ESnet program plan mid CY97
NGI budget info late CY 97 (DOE got zeroed out, but did not lose
any money); will do NGI with internal funds
ESnet program review mid CY98 (question is the review for
the DOE or the ESSC)
ESnet follow-on RFP release mid CY 98. The current contract
expires in 1999; the last one took 2 years to award with all the protests. Jim
expects to have the RFP ready early 1998. There could be problems due to conflict
with FTS 2001, which will be awarded at the same time and is expected to cover
many of the services provided by ESnet.
This is a major issue; it has to do with the follow-on contract
for ESnet. ESnet has done well, and funding is stable.
NGI & DOE funding issues (all 1998). Will do more publicity
and try to get extra NGI funding for 1999.
Current contract expires in 1999
Post FTS-2000 (FTS-2001) issues www.gsa.gov
R&D vs. Production (need to look carefully at the balance,
in particular is it a research network, or a network that supports research)
Other agencies (NASA is on the present contract, should we
include other agencies in future contract, should we join another agencies
contract, pro it could give better prices and better interworking, but
could be harder to award)
Circuit-switched services (e.g. ATM), satellite services (mobile &
fixed), video conferencing, EMS (X.400), electronic commerce, video teleconferencing,
international services, etc. are all covered, so a lot of substantiation will
be needed as to why ESnet is different and should not use FTS2001.
ESnet Steering Committee Report - Sandy Merola
Need Program Representatives to provide input to the Program Plan
Need Program Representatives to provide input to the Progress Report
Need committee to carefully analyze our future requirements
in light of FTS2001 and NGI (concerns over protests for RFP award for ESnet).
Need ESSC to provide input on the R&D production futures
of ESnet & follow on contract to DOE & Jim.
ESnet university connectivity policy.
The payoff was not proportionate to the efforts of hosting regional meetings,
the DOE plan, the LSN, and the NGI R&D workshops.
The counter-forces were politics; Congress seems disinterested
in DoE's role in research.
The expected positive outcome was $0.0.
The known outcome: a new policy restricting university connectivity.
Approving the direct connection of a university will now
also include a letter from an appropriate official of the university. MICS
will re-evaluate existing direct university connections to ESnet. Universities
with existing ESnet connections will need to affirm their need to connect.
DOE Corporate Network
EMnet is the DOE business & corporate network.
There were early offers to carry their traffic & work
with us. It is a separate, non-ESSC issue; it is a potential site issue
for which the ESCC may serve as a reasonable forum. There will be multiple
networks in the DOE: the DOE EMnet, ESnet, and the emergency network. These
networks will touch.
There will be a document recommending the creation of the
EMnet to the DOE CIO (Woody Hall), and the ESSC will be requested to comment on it.
Longer term issues
Greatly increasing requirements
No increased funding
Focus on CERN:
HEP wants better access to CERN
Increased funding requested but the ESSC was unsympathetic,
given the constant funding
Changes in routing requested allowing direct routing access
from all HENP including partner Labs & universities. The ESSC concurs
with the goal, but wants to get a hassle-estimate from ESnet
ESSC must do its share to get networking to be a more important,
visible & funded effort by the DOE.
Applications Requirements Working Group
ESSC agrees that supporting production & research on
the same infrastructure is desirable
ESnet is not just an ISP
MICS provides not only ESnet but R&D of the needed future
MICS: QoS by over-provisioning bandwidth will be non-affordable
Cost per bit will decrease, but demand is rising, so net
ESnet Report - Jim Leighton
Has been formed to help ensure that future network requirements
of the ESnet community are identified
The process includes the following steps
Extract currently documented applications from the draft
ESSC review (ESCC)?
The working group will work with network providers & researchers
(Leighton, Steves, Loken, Jacobson) to identify needed supporting networking services/research.
Deliverable: a white paper identifying issues, etc., by Jan 1998.
July 1997: 18.8 Gpkts accepted (approx. 0.7% DECnet), 5.22
Tbytes accepted, 277 bytes/pkt.
July 1996: 10.7 Gpkts accepted (approx. 2.96% DECnet), 3.03 Tbytes
accepted, 283 bytes/pkt.
DECnet was holding steady while everything else was increasing;
over the last few months DECnet dropped off by a factor of 3.
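A quick sanity check of the average packet sizes quoted above, assuming decimal units (1 Tbyte = 1e12 bytes, 1 Gpkt = 1e9 packets); the small discrepancy against the quoted 277 comes from the inputs already being rounded:

```python
def avg_pkt_size(tbytes, gpkts):
    """Average bytes per packet from total Tbytes and Gpkts accepted."""
    return tbytes * 1e12 / (gpkts * 1e9)

# Later period:  5.22 Tbytes over 18.8 Gpkts -> roughly 277-278 bytes/pkt
# Earlier period: 3.03 Tbytes over 10.7 Gpkts -> roughly 283 bytes/pkt
```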
Moving 50 Mbps into & out of the network during the heat of the day.
INEL T1 via LLNL, link installed, awaiting routing plan resolution
Chicago hub with ATM OC3 installed May 1997, ANL @ OC3 installed
Jun 1997, FNAL @ OC3 installed September 1997, the NAP @ OC3 is expected
October 1997 (upgraded from T3 to support NASA requirements).
Albuquerque hub at OC3c expected anytime now
Connecticut Ave at DS3 was installed Jun 97, will upgrade
to OC3 to support NASA.
Perryman at DS3 installed August 97, will upgrade 2xDS3 to
VBNS T3 via GA to SDSC operational ??/97
Network Virginia local interconnect @ DC hub operational
George Washington U T3 via DC hub will pay for T3 access
circuit, HQ letter to be written.
Human Genome Center T3 via Oakland hub, a new center is being
established in Walnut Creek CA(joint LLNL, LBL, Berkeley effort), operational
status expected Spring '98
LIGO (Laser Interferometer Gravitational Observatory) NSF
project: NxT1 via PNNL, is an experimental facility near PNNL that needs
connectivity to Caltech & other collaborators.
ANL moved to Chicago hub, completed Jun-97, upgrade OC12c
expected early 1998.
FIX-West > upgrade to T3 expected ?? via Oakland hub
FNAL > move to Chicago hub installed Sep 97, upgraded to OC3
GA > local loop upgraded to OC3c expected Sep-97
ITER-US > local loop upgraded to OC3c expected Sep-97
JLAB > moved to DC hub @ T3 completed Jul-97
LANL upgrade to OC3c, expected September 1997
LBNL to move to OC12
MAE-East moved to DC hub completed Jun 97
Upgraded to 10 Mbps/T3 connection, T1 was very heavily congested
MIT upgrade to T3 completed Jun 97
ORNL upgrade to OC3c completed Jun-97
Sprint NAP upgrade to T3 via PNNL
Begun removing FTS 2000 T1 circuits, minimal number left
for keep alive.
All this has cleaned up the architecture by using hubbing.
vBNS peers today
There is a new DOE policy that requires a written letter
from the university for direct ESnet connections.
vBNS is moving ahead with Internet2 & the NSF "Connections"
program; the GigaPOPs seem to be stalled (for the most part), with little/no
info on university-GigaPOP binding and no schedule information. More activity
may be expected by the end of 1997. Part of the problem is the cost; they
did not get as much discount as hoped for from MCI. The GigaPOPs are very
interesting to ESnet since they could provide a rational way to connect up
a lot of universities.
ESnet has established peering with vBNS at SDSC, MAE-East,
and the Sprint NAP. Perryman and Ameritech NAP are in progress. University
interest is high in reaching DOE National Labs.
Likely new peers:
AADS (Ameritech) - Northwestern U, U Chicago, U of Illinois
at Chicago, U Minn, Merit, U Mich, Michigan State, Iowa State, Notre Dame,
Indiana U, U Wisc-Madison
Perryman - JHU, UMd, UPenn, U Virginia, Old Dominion
SDSC (CalREN2) - CIT, UCI, UCLA, UCR, UCSB, USC, USC-ISI
Connecticut Ave - Network Virginia, GWU
No Cal (?) CalREN2 - Stanford, UCB, UCD, UCSF
Atlanta hub - Georgia State, Georgia Tech, UT
Sacramento - Oregon State, University of Oregon
The bad list of universities from the ESSC study was (+ = already there,
- = soon (we hope)):
CMU, Cornell, Northwestern (Chicago & Evanston), U Washington,
U Oregon, Harvard, Duke
The poor list was:
University of Washington - is planned to have vBNS connectivity
Johns Hopkins - will peer at Perryman
U Oregon - via vBNS at Sacramento
Harvard - via vBNS
Duke - could peer with vBNS at Atlanta
UCSD + UCnet
UMd - Perryman
U Michigan - Chi NAP
U Colorado + vBNS
U Wisconsin + FNAL/NAP (ATM tunnel)
U Pennsylvania + vBNS
U Minnesota - Chi NAP
UC Irvine + UCnet
TEN-34/155 has approx. 300 Mbps cross-Atlantic traffic; the European
cost = $40M/year, with essentially no contribution by the US (see www.dante.net/ten-34)
DFN is considering putting voice over ATM; they are doing native
ATM pilots, expect a transition to OC12c next year, have a T3 connection to
TEN-34 up & running, and 2*T3 to the US at 20 MDM/year. Traffic to Germany appears
to be 3 times that coming from Germany.
GARR has 250 sites, 30 INFN points, 70 universities, GARR-B
next phase TEN-34 Oct97-Jul98, 155/622Mbps in '99. Seem to have 4-5 nodes
running ATM. Looking for 45Mbps transatlantic bandwidth via Telecom Italia
to be delivered summer '98. Install GARR router in Perryman, move T1 to
ESnet from PPPL to Perryman
Japan: ESnet 1.5 Mbps operational Sep-97; Beijing moving up
to 128 kbps; Novosibirsk to be installed at 128 kbps. NACSIS has 6 Mbps to the US
(heavily saturated, almost unusable), going to a T3 to SprintLink Oct 97; NACSIS
has 2 Mbps to Europe. Major problems are connections to US universities
and to TEN-34. They want connections to universities via ESnet.
UK: UKERNA has 155 Mbps for the national academic net, uses TEN-34
for European connections, and has a T3 to the US.
Canada has 1.5Mbps link via PPPL. CA*net II production network
to support advanced research has OC3c to STAR-TAP, will support all CA
universities & labs, will use GigaPOPS, they view the STAR-TAP as the
center of international connectivity.
ITER is quite happy with network support; they want better access
to Russia (Kurchatov). The ITER project ends in 1998; then they want to go into
a 3-year pre-construction phase.
CERN will have 2*E1 transatlantic, direct connection to ESnet
planned. Will provide QoS via frame-relay on CERN end. Also looking at
Committed Access Rate capability from Cisco.
DESY has poor access to Japan
Hub connections for International Links
JAERI (was 256kbps): 768kbps/1Mbps FR NAKA-LBNL
Problem with providing "default"
Status: operational Sep-97
NIFS (was 64kbps): 128/256 kbps FR Tokai-LBNL
Status: expected operational Sep-97
KEK (was 512kbps): T1 circuit Tsukuba-LBNL
Status: expected operational Sep-97
Providing temporary default routing
Perryman (it is a big MCI POP near Aberdeen Proving Grounds
in Maryland, I think):
vBNS, DFN (2*T3), CERN (2*E1, working with CERN to install),
INFN (T1, Dec-97), DANTE (T3, Dec-97) with ATM interconnect; looking at
peering at T3 access to vBNS, which is also located at Perryman, with an
OC3 to vBNS
Sprint has a big POP at Connecticut Avenue where ESnet is
Plus ESnet has 2*T3s between Connecticut Ave and Perryman.
At Connecticut Avenue there are connections to ESnet (T3 →
OC3, with connections to NASA/NREN), DOE-GTWN (3*T1), MAE-East (T3), Network
Virginia, Georgetown University, and JLAB (T3).
Issues with International Links
Upgraded ESnet connection to DFN via Perryman, bandwidth
& cost - draft agreement, guaranteed bandwidth CBR.
CA*net T1 to PPPL operational Sep-97, interested in access
CERN carry as ESnet "semi-primary" site.
DANTE plans to establish a POP at Perryman
General university access
The contract with Sprint runs out soon. They are looking at negotiating
a new contract, which will be competed. NASA/NREN is interested in collaborating.
Will use successful aspects of current contract including
advanced communications services, partnership arrangement working within
vendor's general strategy, term = 3+1+1 years, highly flexible contract.
Need to consider whether to coordinate with other DOE/Fed networks (e.g.
ATM ABR, SVC,
hubbing & collocation support,
transition support from the current contract,
dealing with growing bandwidth requirements with fixed budgets,
local-loop costs (1/3 of the budget goes into the last mile).
Schedule for reprocurement:
Outline general requirements, approach, schedule - 4Q97.
Establish working teams (have a volunteer from NASA), evaluation,
procurement - 4Q97
Do initial vendor visits - 4Q97
Refine approach, solicitation - 4Q97
Complete procurement package - 1Q98
Do vendor briefing - 1Q98
Release procurement - 1Q98
Complete evaluation - 2Q98
Develop transition plan - 3-4Q98
Begin transition - 1Q99
Complete transition - 3Q99
ABR (Available Bit Rate) - fair sharing of bandwidth:
ESnet ATM users, international
CBR (Constant Bit Rate) - dedicated-line emulation
UBR (Unspecified Bit Rate) - no guaranteed service
RT-VBR (Real Time Variable Bit Rate) - delay & jitter sensitive:
experimental control (?), video
SVCs (Switched Virtual Circuits) -
all the above capabilities on demand
QoS
Class-Based Queuing (CBQ), being promoted by Van Jacobson,
uses a spare bit in the IPv4 header.
Integrated Services (the IETF model) is the least likely to make it;
it tries to do all things for all people, but is inordinately complicated
Coupling of IP & ATM QoS capabilities
Allocation of resources
Interior vs. exterior QoS
QoS performance levels
IPv6-based collaboration with
Ellemtel (a non-profit co-owned subsidiary of Ericsson &
Telia in Sweden): various network trials of native IP multicast, native
IPv6, & IPv4 & IPv6 QoS mechanisms.
They may be willing to pull a T3 into Perryman.
Proposing to establish an alpha ESnet backbone for research &
trials of emerging technology; it will use PVC connections on the existing ATM
Other project testbeds are being considered
DOE Corporate Network
DoE's Information Management Council (IMC) has tasked the
DOE Networking Group, headed by Tom Rowlett of HR, to create a business
plan for a DOE corporate network.
The precise nature is not yet understood (for example, the security
requirements), but it could clearly impact ESnet, the Labs & the sites.
It is generally recognized at this time that the creation
of a new network is not advisable, so an issue facing the DOENG is "Upon
which existing network should the DOE corporate network be built?" However, it is
very clear that most of the DOENG wants to build its own network.
It could make life more complicated for network people at
sites, e.g. to decide how to route packets if the site connects to both
ESnet and DOENG.
Network Information & Services Group Update - Alan Sturtevant, ESnet
Services, overview, Impact on Science
Short videos to show impact of networking on research collaboration
with US industry, distributed computing support, support for other programs
through virtual network support. Will make available over the Web in a
variety of formats
SC97 support: vBNS will not support it this year, so ESnet has been
asked to. Sprint will provide an OC12c from the Oakland hub to the SC97 show
floor, and ESnet will get an OC3 out of this.
FTEs: Mike Helm (directory services & CA services),
Marcy Kamps, Joe Metzger (news), Joe Ramus, Sue Smith, Allen Sturtevant;
contract people Craig Tenney (VCS/VCSS services), Don Varner; plus a summer
The ESnet mail hubs nersc.gov & es.net have been split. They now have
199 mailing lists; spam filtering is now available for lists (primarily for
The ESnet news feed is alive again; it is not an ESnet-wide newsreader
NIS group server machines: all new servers on 100 Mb Ethernet
switch, telnet & ftp disabled everywhere, clear text disabled for ssh
logins & Kerberized rlogin/telnet logins, 1 secure terminal server
deployed, 2 to go.
The NISG high availability system runs on two dual-CPU 300 MHz
Sun servers (which heartbeat one another) and two dual-connected Sun RAID
disk arrays (using Veritas FirstWatch dynamic failover); it supports
VCSS, the web server, the Oracle dB, ESnet site info, MOUs etc.
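The heartbeat arrangement behind such a failover pair can be sketched as a simple timeout check: each node tracks when it last heard from its peer and takes over the services once a beat is missed for too long. This is a minimal illustration of the idea, not the actual Veritas FirstWatch behavior; the timeout value is an assumption.

```python
import time

class HeartbeatMonitor:
    """Tracks the peer's last heartbeat and decides when to fail over."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self):
        # Called whenever a heartbeat message arrives from the peer node.
        self.last_beat = time.monotonic()

    def peer_alive(self, now=None):
        # The peer is considered dead once timeout_s elapses with no beat;
        # at that point this node would take over the shared services.
        now = time.monotonic() if now is None else now
        return (now - self.last_beat) <= self.timeout_s

monitor = HeartbeatMonitor(timeout_s=5.0)
monitor.beat()
print(monitor.peer_alive())                             # True: just heard from peer
print(monitor.peer_alive(now=monitor.last_beat + 10))   # False: beats missed, fail over
```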
The ESnet DCE servers were upgraded to 4 Sun Enterprise 1 servers,
2 Sun Enterprise 2 servers (dual CPUs) & 2 Sun RAID arrays (FS file servers).
Primary ESnet Web server (HA) Netscape Enterprise v2.0a, Netscape Catalogue
ESnet distributed help desk: a draft v1.0 white paper is available
on the ESCC private page. It needs work on clarification of concepts and clarification
of roles. A pilot version is due by the next ESCC meeting. They are still evaluating
ESnet digital services: goal to seamlessly integrate audio,
video & digital technologies including: VCS/VCSS, Mbone/multicast,
Unicast, ISDN, ATM, packet-switched, A/V streaming, A/V library, record
on demand, playback on demand, Web technologies. Looking at First Virtual
Corporation with a video storage server (ATM based), ATM-ISDN gateway,
and ATM switch. They also support MPEG1 for VHS.
VCS 40 port PictureTel ISDN video hub with a future ATM
They have an SGI workstation with WebForce, Cosmo, Kai's
Power Tools, Adobe Photoshop / Illustrator / Premiere … for picture editing
Storage / transfer requirements: MPEG1 500 kbps to 3 Mbps
(typically 1.5 Mbps), MPEG2 4-100 Mbps, DVD ~10 Mbps, HDTV ~20 Mbps, typically
10 Mbps. The broadcast industry is moving to MPEG2.
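These bit rates translate directly into storage and transfer budgets; a quick back-of-the-envelope conversion using the figures above:

```python
def storage_per_hour_mb(bitrate_mbps):
    """Megabytes needed to store one hour of video at a given bit rate."""
    bits = bitrate_mbps * 1e6 * 3600   # total bits in one hour
    return bits / 8 / 1e6              # bits -> bytes -> megabytes

# A typical 1.5 Mbps MPEG1 stream needs ~675 MB per hour:
print(round(storage_per_hour_mb(1.5)))   # 675
# MPEG2 at 10 Mbps needs ~4500 MB (4.5 GB) per hour:
print(round(storage_per_hour_mb(10)))    # 4500
```

This is why a video storage server and record/playback-on-demand services quickly become a disk-capacity question as much as a bandwidth one.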
Video Collaboration Services Scheduler - Craig Tenney, ESnet
Started as a two-week project; the first internal beta release
was October 27, 1993. It was a telnet interface based on the NIC menuing system,
with a few hundred lines of Perl.
Now it is Web based, with a 40-port MCU and automatic
conference setup and teardown; they have added the Mbone gateway. An online
Web-based scheduler provides online reservations &
schedules with 20K lines of code and an Oracle backend, providing automated
reservation, modification and cancellation plus daily & weekly schedules.
Automated conference setup/teardown is integrated with
PictureTel LiveScheduler (runs on a PC running Unix). The setup starts 2
minutes before the start of the conference; takedown is scheduled 5 minutes before
the end of the conference. Directory numbers are provided online & in
email notices. The MCU autodials the Mbone gateway. Vic & vat start
The ESnet-to-Mbone gateway runs on a DEC Alphastation with
Vic v2.8, Vat v4.0b2, and a VGA & AV321 interface. The VGA output is converted
to NTSC and fed to the VTEL & thus to the ISDN cloud (can support up to 384 kbps).
Quality is lost from the Mbone to room-based video at the VGA-to-NTSC interface;
they are looking at an alternative.
The Help desk has a Remedy trouble ticket system. There
is a site registration system and a form for reporting problems.
Plans for the future include two mbone gateways, and looking
at the FNAL multi-session bridge. They are looking at encrypted versions
of vic & vat. They are also looking to port some of scheduling package
from Perl cgi scripts to Java. They are looking at the Latitude audio bridge
to allow phone conversations to be bridged in. There is a FAQ for the help desk.
George Seweryniak asked for the statistics to report on
mbone usage, this is important for justifying the adding of more Mbone
gateways. The Mbone gateway is assigned as a room (resource) so utilization
will be available.
Van Jacobson is working on a floor control system for
videoconferences, for moderating who talks. White Pine has a reflector that
is H.323 compliant, but vat & vic are not H.323 compliant, so it is unclear
how they could be put together.
ORNL is transitioning its security scope from all of Lockheed/Martin
at ORNL to just the Laboratory. This has delayed the start up of the advanced
A hacker got into a local multi-user Linux PC at CEBAF.
The cracker installed a sniffer and got lots of passwords (not easy to detect
on a Linux PC); they had to pull the plug on Friday for 5 days so they could change
passwords etc. The lesson learned is that you cannot tell users not to bring their
machines on site. They are making a load of recommendations as to how users
should run their Unix PCs etc. For example, they insist the machines use ssh on
site, and they must allow a login from a central site machine so it can
be checked for being in promiscuous mode, for changed MD5 hashes, or for
anything mysterious. At CEBAF they will not give out an IP address
until the central site has installed and checked the configuration of the
PC. It is unclear how far they can push users. Users may not like the ssh terminal
emulator (e.g. key layout, or colors), so they may resist, and a policy may be
required to impose it. It appears one has to go through the pain and agony of
a break-in before the community will accept the smaller amount of pain.
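The promiscuous-mode check CEBAF describes can be approximated on a Linux host by inspecting an interface's flags word, where IFF_PROMISC is the bit 0x100. This is a sketch of the idea, not CEBAF's actual tooling; the interface name and flag values are assumptions for illustration.

```python
IFF_PROMISC = 0x100  # Linux interface flag bit for promiscuous mode

def is_promiscuous(flags):
    """Return True if the interface flags word has the promiscuous bit set."""
    return bool(flags & IFF_PROMISC)

def check_interface(name="eth0"):
    # On Linux, /sys/class/net/<if>/flags holds the flags as hex, e.g. "0x1043".
    with open(f"/sys/class/net/{name}/flags") as f:
        return is_promiscuous(int(f.read().strip(), 16))

# Typical non-promiscuous flags (UP|BROADCAST|RUNNING|MULTICAST):
print(is_promiscuous(0x1043))  # False
# The same interface with a sniffer running and the promiscuous bit set:
print(is_promiscuous(0x1143))  # True
```

A central machine logging in and running a check like this is one way to notice that a sniffer has put an interface into promiscuous mode.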
LANL has decided to partition its networks with gradations of security,
for users who come in from offsite they will be less trusted. ORNL is setting
up a more secure subnet for people who require increased security, which
has more stringent requirements to be allowed onto. The problem will get
worse when NT is multi-user. SLAC has tied it into phone pager system.
We could look into sharing spam-blocking addresses. This
could be part of the distributed help desk.
NGI Futures - Bob Aiken
As usual Bob made this presentation at light speed, so
the notes below are fragmentary. Hopefully his transparencies when available
will be a big help.
The main goal now is to do research (as opposed to providing
increased speed) to advance networking technologies: network engineering,
monitoring, QoS end-to-end to the application (how to bid on resources needed
to provide a QoS; bidding requires security/authentication), data delivery,
security (surety of routing updates, nomadic/remote access, PKI, smart
net management, secure & fair access). Morphnet has been adopted by agencies
as a possible way to do both production and research. Will need distributed
Goal 2 is 10 sites at 1000x, e.g. HIPPI64; this will require
new OS & end system architectures and WDM (to allow better utilization
of existing fiber). Also 100 sites at 100x. IPv4 is the minimum bearer service,
with IPv6 in the future; ATM and other services as required (VPNs). Interconnections
will require GigaPOPs. A big concern is QoS, which will need good monitoring
to be able to show that somebody got what they paid for.
NGI FY98 proposed $105M: DOD 10-40 (need 20 to break even),
NSF 10-23, DOE 0-0, NASA 10, NIST 5, NLM/NIH 5. Much of this is not new
money (but redirected).
Internet 2 is a University program. It will use vBNS/MCI and get
NSF $. Internet 2 is production-net oriented (e.g. beta testing QoS); NGI
is aggressive integration of NET R&D and applications. The NGI connections
peering policy supports program requirements; ESnet will not be a transit.
$13M to universities, $2M for FedNet interconnection R&D, $4M for ultra
high-speed nets (e.g. NTON), $6M for Lab high-speed network access, $4M for
applications. The Senate markup not only did not provide the $25M but also said it
"is unnecessary for DOE to fund the development of enabling technologies
to meet its Internet requirements".
So no funding for Lab upgrades, ultra high speed nets,
interconnection R&D, connecting GigaPOPs except vBNS <> ESnet interconnects.
Primary focus on DOE mission, will keep vBNS <> ESnet interconnections
for access to DOE facilities, will peer with states & GigaPOPs ONLY
when cost effective & mission requires it. Will keep DOE affiliated
universities on ESnet when they show the requirements as well as a letter
from the Dean / Provost (e.g. MIT, UCLA, Caltech …). Will continue informational
coordination through meetings like this & JET.
Network challenged applications are a partnering opportunity
with MICS. Establish a small number of testbeds for ER research applications
that require advances in network & security research and are willing
to adopt these new technologies while they are still experimental &
evolutionary in nature.
Storage, visualization, retrieval of large data sets.
Interactive steering of experiments. Congestion control,
MICS will fund net & security R&D & limited
deployment, possibly enhanced connectivity, and will assure appropriate access of
the applications to new network capabilities; funding $0.5M to $2M.
Sites provide a network-challenged application willing to
tolerate less-than-production networking.
The benefit: applications are afforded the opportunity to live
in the future.
Next steps consider opportunities, send white paper by
Possible R&D includes data & control channels,
QoS, CBQ, ATM, RSVP, security, Morphnet & active nets.
We may need a debriefing on why the DOE proposal was not
acceptable. The next round in 1999 will be different, will depend on how
initial NGI partners do.
Java Based Applications - Dave Dowty, Christopher Newport University
Web-4M is a Web-centric application for collaborations. It is designed
to be simple, intuitive and extensible. It works on PCs & Sun's HotJava;
it has not been fully qualified on Windows NT yet. Macs are Java challenged; they are
behind. Web-4M has a POP email client, calendar, bulletin board, plus chat
rooms with whiteboard. One can cut and paste between applications, and enclose
whiteboard material in email or save it in the document library etc. It can have private
rooms and private conversations. There is a lot of security; it supports ssh,
but does not yet do end-to-end encryption (this awaits Java support).
The browsable document library can be viewed easily from a
Web browser; it supports gif, jpg, txt, html and mpeg. Simply drop a browsable
document into the Web browser.
It supports an interactive slide show: one person can control
the slide show and many others can follow it. They do not have real-time streaming
audio yet, but they do have some audio support.
Groupware can be expensive to support and administer. For
100 total user licenses it is $3500, which includes the server; for 25
users it is $1200. You can run multiple servers. Support for other Unix
clients: they expect that it already works but have not qualified it yet;
it needs JDK 1.1 compatibility. IBM & HP have JDK 1.1 compatibility.
Netscape support requires a new (imminent) release of Netscape.
A Coordinated Browsing System - Mohammed Zubair, Old Dominion University
They want a group of users to be able to surf any web site
with no new software. A user has to register before surfing; this causes
the user to download an applet that establishes a connection between the
client and the application server. A proxy server is then set up for all clients;
it sends the requests to the central registration server, which then tells
the applet(s) to download the web page. There are nasty details to do with
making sure one gets all the objects for a given page. At the moment they
do not synchronize scrolling; they only synchronize the page retrievals.
They plan to add audio support. One target is to allow a help desk to have
a similar view as the user; another could be education.
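The coordination logic can be sketched as a tiny publish/subscribe hub: when one registered user navigates, the central server tells every client applet to fetch the same page. This is a rough illustration of the architecture described above, not the Old Dominion code; the class and method names are invented for the sketch.

```python
class CoordinationServer:
    """Minimal hub: registered clients are told which URL to load next."""

    def __init__(self):
        self.clients = {}  # client name -> callback invoked with the URL to load

    def register(self, name, on_navigate):
        # In the real system this happens when a user's applet connects.
        self.clients[name] = on_navigate

    def navigate(self, url):
        # Broadcast the navigation event so every applet retrieves the page.
        for on_navigate in self.clients.values():
            on_navigate(url)

# Two fake "applets" that just record what they were told to load.
loaded = {"alice": [], "bob": []}
hub = CoordinationServer()
hub.register("alice", loaded["alice"].append)
hub.register("bob", loaded["bob"].append)

hub.navigate("http://www.es.net/")
print(loaded["alice"] == loaded["bob"] == ["http://www.es.net/"])  # True
```

The "nasty details" the speaker mentions arise because a real page is many objects (HTML plus inline images), so each client's proxy must fetch the full set, not just the top-level URL.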