ESnet International Meeting, Kyoto, July 23-24, 2000

Author: Les Cottrell. Created: July 24, 2000


See http://ccwww.kek.jp/ESI2000/ for the presentations

Page Contents

DANTE | DFN | INFN/GARR | UK | Canada | DoE | ESnet | CERN | DESY | KEK/Japan | ITER | NSF | Issues | STAR TAP | ESnet research | Internet 2 | NII | Data Grids | Discussions

DANTE - Dai Davies

The current network is TEN-155, now in its second year. The core is 622Mbps at 5 locations and the access links are 622Mbps capable. The subscribed bandwidth is 2.5Gbps. There are connections to the US (to both ESnet and Abilene) and to Israel.

The next project is called Geant. Planning is significantly underway. The consortium is the same as for TEN-155. The objectives are to obtain Gbit connections, to extend the geographic coverage, in particular to Eastern Europe (the Balkans, the Baltics, Bulgaria, Romania and Slovakia) though not at Gbit speed, to rationalize the global access, and to provide/emphasize guaranteed QoS.

Offers are expected by end September. They want direct access to fiber capacity at 6-10 core locations, extending to 20 locations within 4 years; initial speeds are expected to be 2.5Gbps and possibly 10Gbps. Direct access to fiber is a new challenge operationally and has regulatory implications. Today they use ATM for QoS, and it works; the major challenge is to move from ATM to using IP directly for QoS. There is a move towards operating the network themselves rather than relying on the provider.

Transatlantic prices have gone from 101 kEuro/Mbps/yr (1998) to 18 (1999) to 2.7 (2000), while capacity has gone from 34Mbps to 180Mbps to 450Mbps.
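As a rough illustration (my own arithmetic, not from the talk), multiplying the quoted price by the capacity suggests the implied annual spend actually fell while capacity grew 13-fold:

    # Rough check of the transatlantic price/capacity trend quoted above.
    # Prices in kEuro per Mbps per year; capacities in Mbps.
    data = {1998: (101, 34), 1999: (18, 180), 2000: (2.7, 450)}

    for year, (price, capacity) in sorted(data.items()):
        # Implied annual cost if the full capacity were bought at that price.
        total = price * capacity
        print(f"{year}: {capacity} Mbps @ {price} kEuro/Mbps/yr -> ~{total:.0f} kEuro/yr")
    # 1998: ~3434, 1999: ~3240, 2000: ~1215 kEuro/yr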

They plan to try to rationalize the research connections from Europe as a whole to other world regions: N. America, Asia-Pacific, S. America, ... Connections may be activated at any PoP of the core, with transparent access to/from any other PoP.

DFN - Michael Ernst

G-WIN is the next generation of the German national network infrastructure. It is based on WDM/SDH for IP only, will lease dark fiber, and will provide a point-to-point service with flexible bandwidth provision. It will offer 60*34Mbps point-to-point circuits, available and switchable at any time; larger quantities can be provided within 6 weeks. The architecture is similar to the US GigaPoP architecture. Services will be 0.128Mbps up to 2.4Gbps for Internet; for point-to-point it will offer 2 or 34Mbps. CoS/QoS is urgently needed for a lot of applications, as is a definition of services and organizational concepts. DFN-Internet gives worldwide connectivity. Deutsche Telekom won the bid for the backbone, with routers from Cisco. Throughput in the main traffic hour was 1.28Gbps (1999), 2.8Gbps (2000), projected to reach 38Gbps (2003). Current volume is 200TBytes/month.
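As a quick check (my arithmetic, not from the talk), the quoted busy-hour figures imply traffic more than doubling every year:

    # Implied annual growth from the busy-hour throughput figures above.
    gbps_1999, gbps_2003 = 1.28, 38.0
    years = 2003 - 1999
    factor = (gbps_2003 / gbps_1999) ** (1 / years)
    print(f"Implied growth: ~{factor:.2f}x per year")
    # ~2.33x per year, consistent with the 2.8 Gbps quoted for 2000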

The migration from B-WiN started in June 2000. To the US they have 2*OC12 into 25 Broadway (Telehouse); capacity to TEN-155 is currently 2*OC3, to be upgraded to OC12 later this year; access to Abilene is via Hudson. They have 64kbps from DFN to China, which is heavily used. The G-WiN topology is mainly 622Mbps links.

INFN/GARR - Enzo Valente

30 INFN departments or labs are connected at 34-155Mbps (ATM), physically connected through GARR, with continental connectivity provided by TEN-155 and intercontinental connectivity at 622Mbps provided (for now) by GARR. For commodity Internet there is another link, also used for non-Abilene university sites.

The usual ongoing activity with European labs continues. Present activities include preparation for LHC upgrades and support for the BaBar regional computing center in Rome.

GARR will upgrade to a Gbit network in one year (Dec-01) with the GEANT architecture. There will be QoS or bandwidth allocation (no more ATM?). There is also a request to set up VPNs, in particular for museums.

Big issues are access to non-Abilene universities (very poor connectivity, since it competes with commodity Internet traffic), reserved or very high bandwidth (e.g. Abilene works well since it is only ~5% utilized), and the unknown effect of the GRID (how much bandwidth will be used is very unclear, both in the estimates and in understanding the matrix of flows).

Access to ESnet has been very painful, with lots of delays which were very costly to INFN. The problems are particularly serious for BaBar.

UK - Richard Hughes-Jones

Networking for HEP in the UK is represented by the PPNCG. The PPNCG includes HEP and astronomy; it monitors end-to-end performance, investigates new applications/technologies, provides advice on kit/facilities, and uses ICFA tools. The SuperJANET core is 155Mbps. There are 2*155Mbps lines to the US, with a 3rd in use since 18 July and a 4th planned soon. They connect to CANARIE and ESnet (at Hudson). Traffic from the US is a factor of 3 greater. Losses were bad: 40% in March 1999, 20% in Oct 1999, now < 2%. Losses to CERN and DESY are very small, so right now users are happy. The impact of CAR/WRED appears to be a factor of 2 in loss. More bandwidth is a bigger help.

The Government has discussed 165M pounds to develop computing grid infrastructure; it is not clear how it will be split across research councils, and it is not just for HEP. A management structure is being set up to coordinate PPARC activities. There is keen interest in the middleware initiative being led by CERN, which will be needed by HEP worldwide. They have integrated PAW with Globus.

With 8 parallel FTP streams, per-stream throughput is reduced by a factor of 0.8.
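One way to read that figure (my interpretation, not stated in the talk) is that aggregate throughput still scales nearly linearly with the number of streams:

    # Hedged illustration: if each of N parallel streams achieves a fraction f
    # of the single-stream rate, the aggregate is N * f times a single stream.
    single_stream_mbps = 5.0   # hypothetical single-stream rate
    n_streams = 8
    per_stream_factor = 0.8    # the figure quoted above

    aggregate = n_streams * per_stream_factor * single_stream_mbps
    print(f"{n_streams} streams -> ~{aggregate:.1f} Mbps "
          f"({n_streams * per_stream_factor:.1f}x one stream)")
    # 8 streams -> ~32.0 Mbps (6.4x one stream)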

The contract for SuperJANET4 went to WorldCom, to be at 2.5Gbps with an upgrade to 10Gbps in 2001. Routers are out for tender (Cisco GSR or Juniper M40). All sites are to be up by March 2001 (Manc-ULCC by end Dec 00).

Canada - Dean Karlen

The quality of international connections has improved. They use PingER, with data from Sept 1998, in particular from the TRIUMF beacon site, and are seeing < 1% loss. To US universities things look good (but mainly on Abilene). Canada to the UK now looks good; to DFN priority traffic has improved but is only acceptable at the moment; CERN is excellent. To Japan it is poor, but it improved in March 2000 when improved peering was obtained for Carleton.

CA*net2 is a 2nd generation ATM network for R&E, operational since 1998. The transition to CA*net3 is in progress, currently POS, with OC-48 (2 lambdas of 16-channel DWDM), to be upgraded to OC-192 (8 lambdas); it is funded until July 2002.

CA*net4 plans: connect R&E dark fiber nets with typical $30K institution costs and a 20 year lifetime. IP over DWDM and routing with wavelengths under the control of the customer; still in the planning stages .. stay tuned. This is an interesting direction of moving towards building one's own networks.

ESnet Update DOE Perspective - George Seweryniak

The program plan for Scientific Discovery through Advanced Computing was submitted to Congress on 24 March 2000; it laid out an aggressive plan to build an infrastructure and got $20M.

There is a security plan, a strategic plan, a program plan, progress reports, and independent periodic reviews. There is a worldwide ESnet information site (http://www.es.net), performance monitoring (http://www-iepm.slac.stanford.edu/, very important to show how well things are going), and interagency coordination. Funding for ESnet has increased (in particular, international spending has doubled in the last 2 years).

Security has become a big topic. Some labs have disconnected from the Internet & ESnet; there are weak access controls that jeopardize systems, e.g. weak perimeter defenses. ESnet is responsible for the backbone, users are responsible for sites. DoE is taking steps to counter the risks: updated policy, security plans, review processes; some labs have strengthened countermeasures.

7 year (3+2+2) contract signed with QWEST 20 Dec 1999. Has fast track (2x per year) & accelerated track requirements. 

There are interagency activities under the Large Scale Networks groups. 

Lessons learned from testbed: need to involve many parties, security conflicts with use, need to tune stacks.
Goals for the coming year: increased connectivity to universities/Europe & Asia, increased international bandwidth, and QoS services. Issues include the future of ATM and rapid SONET deployment, network research/middleware, increased security, the impact of future advanced high-end applications, and the emphasis needed on performance measurement.

ESnet 5 - Jim Leighton

Current net is still ATM core based.  International connections at NY, STAR-TAP and W. Coast. Passed 30TB/month.

The QWEST transition may overlap with Sprint over 2 years; they expect to complete most of it in 2000. QWEST provides a high-performance testbed, research collaboration and advanced services for the production network. There is OC48 to the SNV hub and to LBNL. They are working on a direct connection to KEK in SNV.

Japan: JAERI is congested at 1.Mbps and seeing heavy packet loss. NIFS at 256kbps is also congested. KEK (10Mbps) goes via Chicago and is not congested (40-60% utilization).

CERN-ESnet traffic shows heavy peaks for long periods. CA*net-ESnet traffic is growing but not congested.

DFN/INFN/DANTE come into Telehouse at 25 Broadway, which has a 50Mbps link to the ESnet hub at 60 Hudson. JANET/SURFnet/NORDUnet are at OC3 at Hudson, with an OC12 to QWest. In August MIT/BNL/PPPL will be moved off Sprint to QWEST (MIT T3, BNL OC3...).

Aggregate traffic from Europe is ~6Mbps (much higher out to Europe).

In response to a question about the need to improve SLAC's connectivity and the need to provide upgrades more quickly, Jim noted the problem. 

CERN - Olivier Martin

They are upgrading from a 20Mbps C&W circuit to a 45Mbps KPN/QWest circuit. CERN is connected to STAR-TAP, CIXP, TEN-155 and IN2P3. The new contract gave big price/performance improvements, and a budget increase of 20-25% may make it possible to move to 4*STM-1 for the US links faster than originally planned; but this only makes sense if there are real prospects of making effective use of the capacity end to end, which is far from being the case today. Their main concerns are access to SLAC & FNAL. There is a question of where to land in the US, NY or Chicago. NY's pros are direct peering with Abilene, CANARIE & ESnet and the availability of dual unprotected SDH circuits, but it needs the STAR TAP international transit network. Chicago has direct peering with FNAL.

Very high speed file transfer assumes a high performance switched LAN (requires time and money) and a high performance WAN (this requires money but is possible, and requires careful engineering). The problem is to achieve high throughput on long distance links (high bandwidth*delay paths); new protocols may be needed (e.g. skyX), or ad-hoc solutions (e.g. TCP relays).
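As a worked illustration of the bandwidth*delay problem (illustrative numbers, not from the talk), the TCP window needed to fill a transatlantic path is far larger than the common 64 KByte default:

    # Hedged sketch: TCP needs a window of at least bandwidth * round-trip
    # time (the bandwidth-delay product) to keep a long path full.
    bandwidth_mbps = 45.0   # e.g. the new CERN-US circuit speed
    rtt_ms = 170.0          # an assumed transatlantic round-trip time

    # Bandwidth-delay product in bytes.
    bdp_bytes = (bandwidth_mbps * 1e6 / 8) * (rtt_ms / 1e3)
    print(f"Window needed: ~{bdp_bytes / 1024:.0f} KBytes")            # ~934 KBytes

    # Conversely, a default 64 KByte window caps achievable throughput:
    window_bytes = 64 * 1024
    max_mbps = window_bytes * 8 / (rtt_ms / 1e3) / 1e6
    print(f"64 KByte window caps throughput at ~{max_mbps:.1f} Mbps")  # ~3.1 Mbps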

DESY - Michael Ernst

The DFN transatlantic (TA) capacity was upgraded in mid Oct '99 from 155Mbps to 4*155Mbps (ATM, PoS, distributed connections to B-WiN), and the DFN PoP in the US moved to NY/Telehouse. Hanover, Munich, Leipzig and Koln connect to the US. The OC3 link to TEN-155 is oversubscribed. DESY monthly traffic shows heavy growth, doubling from Jan-00 to Jun-00. Connectivity to US users is suffering from overloaded links. The dedicated PVC has been turned off for technical reasons (incompatible with the DFN QoS pilot). Most traffic flows from the West to Germany (20% in the other direction). Connectivity to CA*net goes via a commercial provider at one point and is bad (DANTE to ALTERnet); they are working with DANTE to resolve this (i.e. to route DANTE direct to CA*net).

Since last Thursday they have upgraded from 4*155Mbps to 2*622Mbps, which has improved things, but not to Canada. Reserved bandwidth is still needed. DFN will introduce a CoS/QoS plan starting with 4 classes, of which HEP will get one. The European link to TEN-155 is also overloaded (but flows in and out are more symmetric). It is still a major challenge for providers to get new links in place on time. Connectivity to Japan is much improved: they can get 2GBytes per day at > 100KBytes/sec. They may try to get a managed bandwidth service to Japan.

Russia & the FSU use a civil satellite for DESY/HEP traffic, funded by DESY ($6K/month). The satellite link connects DESY to the FSU (Novosibirsk, Armenia, Georgia); there is a NATO infrastructure grant group; fiber is on the horizon.

DESY has done a major upgrade of its LAN using structured wiring, with > 100km of UTP cat 5 and > 35000 ports, using Cisco technology (21 * 6500 chassis).

KEK - Fukuko Yuasa 

They have good connectivity to US universities and European labs. They need higher performance to the Asia-Pacific region. The other concern is security.

KEK-CERN ATM/PVC at 4Mbps in Feb 2000; KEK - IMnet ATM/PVC at 135Mbps in Mar 2000; KEK - Academia Sinica Taiwan (SInet) at 128kbps - 1.5Mbps started in Jun 2000.

KEK - BINP - MSU: routing was extended to the Moscow area in Sep 1999, with 2Mbps fiber between BINP and MSU carrying traffic to/from ITEP. An upgrade of the PVC to IHEP China is being considered, but the cost of the Chinese end is a concern; the link to China is saturated.

HEPnet-J has typical links of 12Mbps or 2 Mbps.

The Inter-Ministry research information network (IMnet) started in Dec 1994. It is coordinated by JST (http://www.imnet.ad.jp) and provides backup.

APAN started in June 1997 (http://www.apan.net); its objective is to promote regional collaboration and build regional hubs, and it is very useful to Asian HENP.

She showed lots of PingER plots of performance between KEK and FNAL, SLAC, Wisconsin, ITEP. 

There is a testbed between Japan and Europe for telemedicine and IPv6. It includes CERN, KEK, MONARC, H.323 DV over IP, data grid, and IPv6. The Japan Gigabit Network (JGN, http://www.jgn.tao.go.jp/) started in 1998 and is a 5 year program.

Concerns include security (how does one manage both high security and high performance; do we need a HEP VPN?) and, secondly, a higher quality network to the Asia-Pacific region.

Update on ITER - Casci

There is a new design with reduced parameters. The EU, Japan and the Russian Federation are doing the design. The EU, Japan and Canada are expected to offer sites in Spring 2001.

The data currently exchanged between the joint work sites has stayed constant; the normal evolution of the links between Europe & the US/Japan was enough to support the project. Garching uses the WiN backbone; an average of 250kbps available is adequate. The future requirement will be to support the NAKA and Garching remote sites plus the ITER site. It will be a big collaboration with remote analysis, diagnostics and general remote participation. Remote control of the machine is unlikely to be needed, due to safety issues (they use tritium).

To get experience of remote requirements they are using JET. A 2Mbps line from JET to JANet is currently in use, this also has connectivity to TEN-155. First tests are very positive and are providing valuable input for "remote participation" for ITER.

NSF - George Strawn

PITAC (the Presidential Information Technology Advisory Committee) produced an influential report that resulted in a proposal to Congress for a fundamental increase in funding for IT research: from $180M (1999) to $270M (2000), with hopes for another $100M next year. The major foci are software, scalable information infrastructure, high end computing, and social impacts. This will require the science of computing to be propelled forward. There was a solicitation last fall, being dealt with now; it received 2000 proposals requesting $300M, whittled down to $190M, and exciting projects are to be announced under the ITR banner by this fall. They hope for more money next year. The area most relevant to networking is the scalable information infrastructure: we are moving towards a billion node Internet, but can only simulate a million nodes.

Advanced networking infrastructure initiatives at NSF: for the last 4-5 years they have been supporting STAR TAP and international connections with 15% of the budget; this extends to 2002. The bulk of the money is in supporting the domestic backbone and encouraging universities to connect to it. In the last few years this was $10M per year for the vBNS backbone and $20M for connections. Now vBNS is not supported by NSF; MCI will continue support at no cost. They made 170 awards to universities to connect to vBNS or Abilene. There is also a program to support middleware and measurement under the Internet technology program.

PITAC held 6 workshops to make future recommendations. The recommendations are to move up the protocol stack, i.e. to middleware and applications. ITR is interested in scientific and engineering applications that will make use of networks only just becoming available. These need to be of merit to the proposing disciplines/sciences, so they want joint funding. They will give direct support to the science as well as the networking, and will be open to the international, national, regional and local infrastructure requirements of the proposals.

Middleware is an important activity, and they see much global activity working on it. But industrial colleagues are suggesting that middleware will be handled by industry.

NSF networking infrastructure support is moving more towards scientific community interest support and away from broad academic community support (more like DoE and NASA). PITAC also recommended that NSF get more involved at looking at how to serve the scientific community in the long term (20-30 years).

George Strawn is very interested in the moves into dark fiber. Wants to help dark fiber industry to develop in order to reduce the costs by large amounts. 

Issues - Larry Price

STAR TAP - Linda Winkler

STAR TAP provides a persistent point for international connectivity to the US. It is not meant to be the only point of connection; there are about 14 international connections. CERNET (China) will be up later this year, initially at 2Mbps going to 10Mbps.

Initially it was a layer 2 connect point: sites brought in their ATM connections and bilaterally peered with whom they needed to. This requires considerable skill, so they were asked to put in IPv4 routing (i.e. a STAR TAP provided router); they also provide IPv6 routing. There have been Diffserv experiments with DoE EMERGE and some international links. They have many measurement machines and support OC3MON and NLANR AMP. They also support an NLANR web cache.

STAR TAP (ST) is AUP-free: it accepts routes & traffic from any participating NRN (excluding commercial/commodity traffic & routes). It is configured to avoid accidental use of the ST ITN (International Transit Network) for inter-continental/inter-regional routing (e.g. SingAREN <-> APAN via the ST ITN). Sometimes inter-regional connectivity via ST is provided.

They are developing international peering with sites in Seattle, LA, NY & Miami as well as Chicago. The distributed ST allowed transit from JANet and DANTE via NY to Chicago to Yokohama for INET. Hope to provide such transit on a persistent basis. 

ESnet Research - Jim Leighton

ESnet is participating in the DOE Science Grid (DSG) and implementation of the "Grid" concept within the DOE unclassified area. Will provide Digital Collaboration Services (DCS), DSG networking Testbed, network bandwidth management, X.509 CA & CS, Grid directory service, networking research (QoS, MPLS).

The testbed is to provide a persistent WAN test bed, allowing testing & research without jeopardizing production network operation and serving as a staging area before putting things into production.

The bandwidth management vision is to allow power users to reserve large portions of the available bandwidth for fixed periods of time. ESnet is developing a simplified reservation agent. The initial assumption is that "gonzo" reservations are sparse (i.e. few authorized users, few runs, few hours/run). There are many complicated issues, including authentication, authorization and accounting; it is difficult to allocate and to reserve a distributed service (paths may be predefined to match reservations); reserving MPLS paths may be an initial answer. The network will provide poor(er) service to other users at run time (is this acceptable?).
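A minimal sketch of what such a reservation agent might look like (entirely hypothetical; the names, path model and checks below are my own illustration, not ESnet's design):

    # Hypothetical sketch of a simplified bandwidth reservation agent,
    # assuming sparse "gonzo" reservations: few users, few runs, few hours.
    from dataclasses import dataclass

    @dataclass
    class Reservation:
        user: str
        path: str        # e.g. a predefined MPLS path between two sites
        mbps: int
        start_hour: int  # simple hour offsets, for illustration only
        hours: int

    class ReservationAgent:
        def __init__(self, path_capacity_mbps, authorized_users):
            self.capacity = path_capacity_mbps
            self.authorized = authorized_users
            self.booked = []

        def request(self, r):
            # Authentication/authorization reduced to a membership check.
            if r.user not in self.authorized:
                return False
            # Reject if any hour of the run would oversubscribe the path.
            for hour in range(r.start_hour, r.start_hour + r.hours):
                used = sum(b.mbps for b in self.booked
                           if b.path == r.path
                           and b.start_hour <= hour < b.start_hour + b.hours)
                if used + r.mbps > self.capacity:
                    return False
            self.booked.append(r)
            return True

    agent = ReservationAgent(155, {"alice"})
    print(agent.request(Reservation("alice", "SLAC-ANL", 100, 0, 4)))  # True
    print(agent.request(Reservation("bob", "SLAC-ANL", 10, 0, 1)))     # False: not authorized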

They are looking at providing a CA & CS, with the vision that ESnet would participate in establishing the infrastructure needed to build a production-quality DSG-PKI.

ESnet is looking at a bandwidth reservation agent. They want to provide a petite (low bandwidth) QoS, assigning a fixed, relatively small level of QoS traffic to each major site (e.g. for VC & VoIP). They will probably use MPLS, where the source can select which path to use.

Internet 2 - Heather Boyles

180+ universities (not all connected today), 60+ corporations, 40+ other research organizations, 30+ MoU partners (e.g. DANTE, DFN, APAN). Institutions joining Abilene commit to continued connection via a separate path to the Internet for common traffic. Contemplating doing transit between network peers.

150 institutions participating on Abilene (some are corporate research Labs etc.), and 33 on vBNS. 

There are still some routing problems: many universities are reached via Abilene but the reverse path goes via the commercial Internet.

The activities include applications (discipline specific coordination), middleware (core middleware ... identifiers (eduperson), authentication, authorization, directories), network technologies (Qbone, IPv6, multicast), network infrastructure, partnerships/tech. transfer (international, industry, rest of education).

Miami may become a connection point for some S. American networks. AMPath is working with Florida International U to provide a S. America crossing including Brazil (Rio & Sao Paulo/Santos), Argentina, Colombia and Chile. The Global Crossing cables come up on the Atlantic side in 4Q this year and on the Pacific side in 2Q 01; connections will then be needed from local networks to the "Telehouse", and each country will have 43Mbps to Miami. The Global Crossing commitment is for 3 years. Other cable providers are also building out to S. America.

Global Crossing also lands in Panama, Peru, the Virgin Islands, Venezuela & Ecuador.

NII

This is the follow-on to NACSIS. They provide networking for research and education in Japan, and also provide international connectivity to ESnet and elsewhere. Bandwidth to the US is 30Mbps maximum and 10Mbps in general. They have 50Mbps to London/DANTE; Telehouse London is full, so they are moving to the London Docklands area. Bandwidth to San Jose is 40Mbps, shared with Abilene. The Abilene connection on the W. Coast goes from San Jose to LA. ESnet still connects at Chicago; a direct connection to ESnet in San Jose is awaited.

Data Grids in Physics and Astronomy - Harvey Newman

BaBar will have 100TB in 2000, ~10PB by 2005 and ~100PB by 2010. LHC will produce about 100MBytes/sec from the 3rd level trigger.
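For scale (back-of-the-envelope arithmetic, not from the talk), 100 MBytes/sec of trigger output accumulates petabytes per year of running:

    # Back-of-the-envelope check on the LHC rate quoted above.
    rate_mbytes_per_sec = 100          # 3rd level trigger output
    live_seconds_per_year = 1e7        # a commonly assumed accelerator year

    volume_pb = rate_mbytes_per_sec * live_seconds_per_year / 1e9  # MB -> PB
    print(f"~{volume_pb:.0f} PB per year per experiment")          # ~1 PB/year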

Discussions

The success of BaBar in using the network to avoid tape copying was very exciting to many participants, both in its immediate impact on the network and in its potential future network impacts (e.g. with FNAL Run II and LHC). The word was that, apart from bulk raw data, tapes are dead. I had a discussion with Harvey concerning the impact of SLAC (to IN2P3) traffic on the CERN international link, and he put together some planning figures for future upgrade requests for the CERN US link.

I met with Richard Hughes-Jones of Manchester to discuss the components involved in RTT delays including PCI bus etc. I have a paper on his findings. We also discussed our mutual desires to collaborate on a UK funded QoS proposal between SLAC and Daresbury. This proposal is being put forward by Robin Tasker and Paul Kummer of DL.

An issue of increasing interest is the need for improved performance to countries of ESnet interest outside N. America, W. Europe and Japan.

Issues: other regions

Improved connectivity needed to:

And who else:

Who has poor connectivity? We have measurements.
Which countries are asking for help?
What can/should we do about improving performance?

Who should we enlist to help:

The table below shows the HEP countries (outside N. America, W. Europe, Japan, Singapore) listed in the Particle Data Group (PDG) diary, sorted by the number of institutions per country and the number of collaboration sites per country from major HENP and Fusion experiments.

