SCIC Meeting, FNAL, April 15/16, 1999

Rough Notes by Les Cottrell

Introduction

Everybody introduced themselves. There is a Web site linked from the ICFA directorate page and from the HEPIC/HEPNRC page (http://www.hep.net/ICFA/index.html). There is a mailing list which is open for submissions by committee members. There will be a second list for affiliates.

The ICFA-NTF made a final report to ICFA in summer 1998 in Vancouver. As a result a Standing Committee on Inter-regional Connectivity (SCIC) was created, with the charge: "Make recommendations to ICFA concerning connectivity between America, Asia and Europe".

Review of Charge - Matthias Kasemann

Matthias summarized the ICFA meeting in Vancouver in 1998, which included a charge to the SCIC. Following this there was a discussion of the charge. One issue was the need to add something on creating a credible model of costs. Another was to broaden the charge from "review forecasts of future bandwidth needs" to the more general "network needs". It was pointed out that there was no representation from the UK, which has traditionally had some of the worst transatlantic connectivity.

Report from ICFA-NTF - David Williams

David covered the history of the ICFA-NTF and the report it produced. The ICFA community continues to push Internet services beyond what can be easily afforded. This is due to the collaborative nature of HEP, which requires day-to-day interactions of people worldwide. The data volumes are very big (Pbytes/year). The fully automated & widely-distributed data processing and data analysis environments that we are setting up are unprecedented in scale.

For monitoring, David showed inter-continental packet losses: within-continent performance is pretty good (< 1% loss), but inter-continent performance is typically poor. He also showed the impact of Brown changing its ISP.
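
These loss figures come from simple ICMP echo (ping) measurements. As an illustration only (not the actual monitoring code), a minimal ping-based loss probe might look like the sketch below; the target hosts and sample count are placeholders:

```python
# Minimal sketch of a ping-based packet loss probe (illustrative only;
# hosts and sample count are placeholders, not the real monitoring setup).
import re
import subprocess

def packet_loss(host: str, count: int = 10) -> float:
    """Return the percentage packet loss reported by ping."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    return float(m.group(1)) if m else 100.0

for host in ["cern.ch", "kek.jp", "fnal.gov"]:  # placeholder targets
    print(f"{host}: {packet_loss(host):.1f}% loss")
```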

Possible approaches were: build our own intranet, fix individual congested links, or just leave it as is (the latter wastes collaborators' time). Rays of hope: bandwidth costs are coming down, and there is a greater understanding of the performance of the Internet.

QoS tends to mean ATM facilities today. It has a limited area of applicability, it is needed end-to-end, and it cannot eliminate the need for adequate bandwidth: any link loaded > 80% for hours at a time cannot provide satisfactory performance, yet for many of our site pairs the utilization is > 80% for most of the working day.
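
The "> 80%" rule of thumb can be made concrete with a textbook queueing argument (an editorial illustration, not from the talk): in an M/M/1 model the mean delay scales as 1/(1 - utilization), so it blows up as a link approaches saturation:

```python
# Textbook M/M/1 illustration (editorial, not from the talk):
# mean time in system T = (1/mu) / (1 - rho), where rho is utilization,
# so delay grows without bound as the link approaches saturation.
service_time = 1.0  # mean packet service time, arbitrary units (1/mu)

for rho in [0.5, 0.8, 0.9, 0.95, 0.99]:
    delay = service_time / (1.0 - rho)
    print(f"utilization {rho:.0%}: mean delay = {delay:5.1f} x service time")
```

At 50% load the mean delay is 2x the service time; at 80% it is already 5x, and at 95% it is 20x, which is why sustained > 80% utilization cannot give satisfactory performance.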

The recognition of the importance of this activity and SCIC came heavily from Wiik & Richter.

New happenings: quick move to TEN-155 in Europe, most national nets in Europe deploy or plan to deploy 155 or faster backbone, in U.S. I2 is becoming a reality with 622Mbps & 2 Gbps backbones, QoS (still) coming along.

Conclusions: work on GigaPoPs, track technology, follow QoS.

There was a discussion on the reality of QoS today, how to administer it, and how to take advantage of it for HEP applications (e.g. running a bulk data transfer service during low utilization periods, and moving away from applications that demand low latency, such as telnet).
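
As a toy illustration of the off-peak transfer idea (the 22:00-06:00 window is an assumption, not something proposed at the meeting):

```python
# Toy sketch of deferring bulk transfers to an assumed off-peak window
# (the 22:00-06:00 local-time window is an illustrative assumption).
import datetime

OFF_PEAK_START, OFF_PEAK_END = 22, 6  # local hours, assumed

def in_off_peak(now=None):
    hour = (now or datetime.datetime.now()).hour
    return hour >= OFF_PEAK_START or hour < OFF_PEAK_END

print("start bulk transfer" if in_off_peak() else "defer to off-peak window")
```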

Status & future directions of US research networks - Phil DeMar

General observations:

Internet 2 (I2):

I2 is a project by a consortium of universities; it is not a network and not a Federal government project. The vBNS (NSF funded) is the pre-I2 backbone; Abilene is the UCAID-sponsored I2 backbone.

UCAID has 140 members; membership is limited to US research universities. US National Labs are not members of UCAID; their connectivity comes via ESnet.

The AUP prohibits transit traffic between affiliates, i.e. ESnet site to another affiliated network's site via I2 is not allowed (ESnet to I2-member is OK). The AUP also prohibits commodity traffic (so a university will need to maintain a non-I2 path).

The I2 model envisions GigaPoPs to ease AUP issues. A GigaPoP is a regional aggregation of networks formed by I2 universities to connect to high performance networks (e.g. MREN, CalREN2). They are designed and managed by the local GigaPoP consortium. The architectures vary widely (MREN is a centralized ATM switch, CalREN2 is 2 SONET rings). They interconnect to the I2 backbone, other research nets (e.g. ESnet) and commodity ISPs.

NGI:

NGIXes:

ESnet:

vBNS:

Abilene:

International connectivity:

Crystal ball:

Summary:

Canada site report - Dean Karlen

CA*net II is upgrading to CA*net 3, a DWDM network built on 2*OC-48. The new OC3 between the UK & US appears to have helped (halved the loss rate to about 7%). Germany got better over Christmas as the transatlantic link was upgraded from 90Mbps to 155Mbps, CERN looks OK, and Italy is improving.

There are transatlantic performance problems which need to be addressed and will require international cooperation, so there should be an ICFA recommendation to improve this, in particular between Canada & DESY.

SLAC site report - Richard Mount

The big driver is the BaBar turn-on.

E10K going from 24 to 64 processors, ~ 250 Ultra Sparcs, 7TB of staging disk, 6*Silos.

Upgrading networking for core & farms (e.g. 2*8500 routers, 6*6500 switches).

ESnet connection moving from 45Mbps to 155Mbps.

Pulling 96 fibers to Stanford. Will enable NTON connectivity (OC12-OC48 for W. Coast) and I2/CalREN2.

Software challenges include a Pbyte object-oriented database, and moving to a distributed Pbyte database, even if the bulk of the data is moved only by air freight.
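
The air freight remark reflects the familiar observation that shipping media can beat the network for bulk data. A back-of-the-envelope comparison (all figures are illustrative assumptions, not from the talk):

```python
# Back-of-the-envelope: air-freighting tapes vs. the network.
# All figures are illustrative assumptions, not from the meeting.
tape_capacity_gb = 50          # assumed capacity per tape cartridge
tapes_shipped = 200            # assumed size of one shipment
transit_days = 2               # assumed door-to-door air freight time

payload_bits = tapes_shipped * tape_capacity_gb * 8e9
effective_mbps = payload_bits / (transit_days * 86400) / 1e6
print(f"air freight: ~{effective_mbps:.0f} Mbps effective")    # ~463 Mbps

link_mbps = 155                # an OC3-class link, for comparison
transfer_days = payload_bits / (link_mbps * 1e6) / 86400
print(f"155 Mbps link, fully dedicated: {transfer_days:.0f} days")  # ~6 days
```

Under these assumptions a single shipment sustains roughly three times the throughput of a fully dedicated OC3, which is why air freight remains in the picture even as links are upgraded.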

Other WAN related projects:

France site report - Denis Linglin

See http://www.renater.fr/international/NTI.html

France accesses TEN-155 through CERN. The main problem in France (as elsewhere) is that as soon as a new link is opened the bottleneck moves; it is now on the 2Mbps link between CERN & Lyon ($500K/year for 34Mbps between France & Switzerland for a dedicated link used by HEP). There are 2 nets, IN2P3 & Renater, which interconnect. The IN2P3 network is being dropped apart from the links to the US. Labs around Paris go through Renater to get to IN2P3, and from there to CERN & the CERN-US international link. The OC3 Renater link to the US is saturated.

Renater 2 is mainly a star topology centered on Paris, based on dedicated OC3 lines. France Telecom provides the links but not the PoPs. IN2P3 will be allowed to build its own virtual private network (PHYNET 2) within the Renater network.

S. America / Brazil site report - Sergio Novaes (IFT/Sao Paulo)

HEP introduced BITNET into Brazil in the 80's. Today there is no HEP-specific network. There are 3 academic nets: RNP (a connecting backbone funded by the Science & Technology ministry), Sao Paulo & Rio. There used to be international links to the US via FNAL; now they go via MCI (Sao Paulo, 2Mbps), CERFnet (Rio, 2Mbps) & IBM. They plan to upgrade the international links from 2Mbps to 155Mbps. The Brazil backbone for Rio will move to 155Mbps this month. Internally in Brazil there are links up to 10Mbps, but most are much smaller. There is a 2Mbps link between Rio and Sao Paulo. The Internet connection between Rio & MCI (the international link) at 2Mbps is saturated apart from about 5 hours/day. There is no special bandwidth for A&R. Brazil has about 117K nodes (180M people); the US has 20M, Canada 800K, Colombia 10K, and Argentina (25M people) 20K nodes. There are probably about 400 HEP theoreticians and experimentalists in S. America. Brazil's node count has grown 700% in 2 years, and 50% of nodes are now .com.br. There is an Internet 2 initiative with $5.5M funding and 12 MAN projects.

The main need is increased bandwidth: dedicated A&R links, better still dedicated HEP links. For a D0 farm they need 20kbps/PC, so 30 PCs need 600kbps (the total available on the US link is 2000kbps). The main HEP partners are CERN & FNAL. It is unclear how much fiber/infrastructure is being put in place to the US; David says at least 2 international consortia, Oxygen and Global Crossing, are going into Brazil. The major problem is money. The telecomm industry is being privatized.
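
Making the farm sizing arithmetic explicit (figures as quoted in the notes):

```python
# D0 farm bandwidth estimate, using the figures quoted above.
per_pc_kbps = 20   # required per PC
pcs = 30
link_kbps = 2000   # total capacity of the US link

needed_kbps = per_pc_kbps * pcs
print(f"needed: {needed_kbps} kbps "
      f"({needed_kbps / link_kbps:.0%} of the US link)")
# -> needed: 600 kbps (30% of the US link)
```

So the farm alone would consume about a third of an already saturated international link.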

Germany - Michael Ernst

DFN has a 34Mbps link to Perryman and a 155Mbps link to Telehouse. There is no link to STARTAP. They are connected to TEN-155, which has a 2Mbps connection to Japan.

Within Germany the DFN connects 150 research institutions. It is a VPN cut out of the national telecomm network. They are carrying 70TB/month. The DFN link is congested; the TEN-155 is OK.

The Russia connection via satellite is heavily congested, and there is no permission to run a dish in Moscow. ITEP has gone over to using the Russian University Network (RUN), which improved the quality of the connection to ITEP. Other sites in Moscow are not as lucky. There is a lot of potential in the Russian HEP community if they had better connections.

European connectivity improved with TEN-155; an upgrade of the A&R net in Germany is expected next year. Problem areas: N. America (2*OC3 won't help much), Russia now, & Japan in the future. DFN now does ICMP traffic shaping, especially at international exchanges. At DESY: structured wiring with CAT7 cable, switched 10/100BaseT, a GE-based backbone, and routers/switches being prepared for multicast applications. They hope to be able to use DiffServ to provide managed bandwidth for improved performance to HEP sites in N. America.
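
For reference, DiffServ marking of the kind DESY hopes to use amounts to setting the DS field (the old IP TOS byte) on outgoing packets. A minimal sketch using the standard sockets API (the DSCP value and endpoint are illustrative assumptions, and the mark only helps if routers along the path are configured to honor it):

```python
# Minimal sketch of DiffServ marking via the IP TOS byte (illustrative;
# the DSCP value and endpoint are assumptions, and the mark only matters
# if the routers along the path are configured to honor it).
import socket

AF21 = 0x12                 # an illustrative DSCP code point
tos = AF21 << 2             # DSCP sits in the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
sock.connect(("example.org", 80))   # placeholder endpoint
# Packets sent on this connection now carry the DSCP mark.
```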

CERN site - Manuel Delfino

Slides prepared by Olivier Martin.

The campus has structured cabling (CAT5) essentially everywhere apart from a few experimental areas. FDDI is being phased out and GE phased in. There are requests for 100BaseT to offices, the fully routed backbone is almost complete, and the firewall has been upgraded.

European connectivity: to Europe by TEN-155 at 155Mbps, shared with SWITCH (1/4). A non-negligible fraction of CERN's Internet traffic is flowing directly through the CIXP (e.g. to France). They have selected a reasonably priced backup Global Internet provider (Carrier1) for the CIXP.

For the US, CERN leads a consortium (with US & Canadian partners, IN2P3 & WHO). The link (2*E1 to the US via SwissCom & MCI) has been upgraded to 12Mbps (Apr 1/99) and will go to 20Mbps (Oct 1/99) via C&W (ATM VBR-nrt). The MCI/Perryman PoP has been relocated to C&W Chicago. An STM-1 connection to the STARTAP has been established.

CIXP (CERN Internet eXchange Point) has SWITCH (Swiss Academic & Research Network), RENATER, IN2P3, TEN-155, various ATM networks (SwissWAN, Geneva-MAN/TxP), 15 commercial ISPs, redundant SwissCom 622 Mbps SDH local loop installed, France Telecom fiber installed, new telecom operators DiaX/SIG, SUNRISE, Carrier1.

Issues: how to join UCAID/I2, peering arrangements paperwork, Japan-CERN via STARTAP, Mexico & S. America, good choice of commercial ISPs at CERN thanks to CIXP, Russian problems.

Poland is about to connect to TEN-155.

US - Harvey Newman

See: http://l3www.cern.ch/~newman/icfantf/icfascic_499.ppt

Japan - Yukio Karita

See http://www.kek.jp/~karita/icfascic-1.ppt

vBNS is reluctant to peer with Japan HENP at STAR-TAP

The NACSIS - Europe line is saturated. Yukio does not know when it will be upgraded; an upgrade was promised in May 99, but Yukio is not optimistic.

Overview & Future directions of Internet Monitoring in HEP - Les Cottrell

See: http://www.slac.stanford.edu/grp/scs/talk/scic-apr99/

Remarks:

Action items:

SCIC Wrap-up

The next ICFA meetings are: August @ SLAC (as part of the Lepton Photon conference), November @ FNAL, and Feb. 10-11, 2000 at RAL.

Requirements Analysis Working Group (Harvey Newman & Matthias Kasemann)

Technology, status, cost & development/expectations for HEP (Richard Mount, Michael Ernst, David Williams)

Monitoring:

Report on Survey & tracking of HEP network aware applications, QoS, prototyping work

Remote regions

Need a small focused task force to address particularly bad performance areas:

Reports to ICFA