ICFA/SCIC meeting, CERN Nov 1999

Authors: Les Cottrell. Created: November 13, 1999


Page Contents

Introduction | Italy | Canada | QoS on NRENs | France | UK | WAN throughput & video | Japan | DESY | CERN | Report to ICFA

Introduction

This was a meeting of the International Committee on Future Accelerators (ICFA) Standing Committee on Inter-regional Connectivity (SCIC), held at CERN on November 13, 1999. In attendance were Harvey Newman, Yukio Karita, Michael Ernst, Manuel Delfino, Denis Linglin, Les Cottrell, Matthias Kasemann & Richard Hughes-Jones. Dean Karlen attended by video conference from Canada.

There have been two presentations from the SCIC: the first at the Lepton Photon conference, the second at FNAL at a meeting on future accelerators.

Action items are identified in italics.

Italian situation - Federico Ruggieri INFN/Italy/Bari

Italy has a 155 Mbps backbone between the core (Milan, Bologna, Rome & Naples) and the major PoPs. They have ~200 sites, most of them universities plus some major laboratories, including space agencies, etc. The leaf links are 34 Mbps. There is a 45 Mbps link from Naples to NY, a second 45 Mbps ATM link from Milan via DANTE to NY, plus 155 Mbps to TEN-155. There is still a 1.5 Mbps (very expensive) link from Bologna.

TEN-155 has a new 10 Mbps link to Cyprus, and a new 10 Mbps link to Japan via London. The INFN Milan link goes via Frankfurt/DFN to 25 Broadway/Telehouse in NY. TEN-155 traffic to Europe is about 35 Mbps of the 155 Mbps. General Internet traffic goes via Naples over UUnet. The DANTE link peers with Abilene; they hope to get 20 Mbps to ESnet, which requires ESnet to get colocation at 60 Hudson. The 1.5 Mbps link from Bologna will be replaced when the ESnet link in NY is completed. N.B. DFN is still peering with ESnet at Perryman; this will go away when ESnet gets into 60 Hudson. There are also concerns that ATM does not go above 622 Mbps. We need to get more information from ESnet on the plans and dates for international connectivity in NY. We need to get a better view of the future of ATM from the technologists (e.g. Lucent).

Canada - Dean Karlen

See ICFA-SCIC Meeting CERN, November 13, 1999 Canada Report by Dean Karlen / Carleton University.
Since July connectivity has stayed the same or improved. In late April Carleton joined CA*net2. Over the last couple of months Carleton has had internal problems that increased loss to above 2%; they think they have a handle on the problem. To US universities there has been a general improvement; a couple of sites had tremendous loss over a couple of months, but overall loss is 1% or less. To the UK things have improved from very poor, to bad, to acceptable since August 1999. Connections between Canadian sites and DESY improved in August due to reserved bandwidth; now Dresden appears to be as good. DFN now has 4*155 Mbps to the US. Collaborators with DESY have placed a Linux host at CERN to allow login via CERN to DESY, avoiding problems with the overloaded DFN transatlantic link. Typically there are about 5 users/day; Dean will survey users to compare use of the Linux/CERN connection with the more direct DFN link. Connectivity to CERN has been good, Japan is also good, and Italy has improved from bad to poor. In summary: big improvements in the last year, with remaining concerns about Canada-Germany; Italy is rather poor but not a priority. Canada peers via STAR-TAP, except for Germany, Italy, UK and some others.

Quality of Service for National Research Networks - Les Cottrell

See http://www.slac.stanford.edu/grp/scs/net/talk/icfa-nov99.htm

Les covered the changes in the ICFA/PingER project since the last meeting, the current deployment (23 monitoring sites in 12 countries, > 2100 pairs, > 500 remote hosts being monitored), and results/analysis of the performance between and to the major regions (N. America, W. Europe, E. Europe, Russia, Middle East, E. Asia, S. America, and India). He then briefly covered the effect of loss, RTT and jitter on applications, in particular bulk data transfer and interactive applications, and showed some results on availability. He then went on to show what can be done to improve performance, and went over the current work plan.
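
As an illustration of the kind of measurement PingER makes, here is a minimal sketch in Python (illustrative only, not the actual PingER code): it sends a small batch of pings to a remote host and derives loss, average RTT and a crude RTT spread. The host name is a placeholder, and the -c flag assumes a Unix-style ping.

    #!/usr/bin/env python3
    # Minimal PingER-style probe (illustrative sketch, not PingER itself).
    import re
    import subprocess

    HOST = "remote.example.org"   # placeholder for a monitored remote host
    COUNT = 10                    # pings per measurement batch

    def ping_stats(host, count=COUNT):
        """Run the system ping (Unix-style -c flag assumed) and parse
        the individual RTTs (ms) and the packet loss (%) from its output."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
        loss = 100.0 * (count - len(rtts)) / count
        return loss, rtts

    loss, rtts = ping_stats(HOST)
    if rtts:
        avg = sum(rtts) / len(rtts)
        spread = max(rtts) - min(rtts)   # crude stand-in for jitter
        print("%s: loss=%.0f%% avg RTT=%.1f ms spread=%.1f ms"
              % (HOST, loss, avg, spread))
    else:
        print("%s: 100%% loss" % HOST)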

In summary: 

The ICFA PingER monitoring effort already monitors about 55 countries and all but 12 of the countries in the Particle Data Group booklet. The un-monitored countries are CL, CU, CR, EG, ID, MX (D0), PE, PK (CMS), UY, VE, VN, and ZA. We (SCIC) need to decide what extra countries to monitor and provide appropriate sites and site contacts for those countries. A contact for Pakistan was provided by Harvey. Sergio may be able to help for the S. American countries (CL, PE, UY, VE). A possible contact for Argentina is Lucas.Taylor@cern.ch, who is trying to set up a link between an observatory in Argentina and FNAL. In addition we need to provide input on which countries/institutes need to be Beacon sites. Finally, if there are sites that would like to become ICFA monitoring sites then a contact needs to be provided; Israel was mentioned as being such a site, and Matthias will get the email address of a contact. Contacts' email addresses should be sent to cottrell@slac.stanford.edu, and it would be helpful and more efficient if the referrer approached the contact first to introduce the ICFA monitoring effort.

France - Denis Linglin

In France the RENATER 1 to 2 migration took place between August and October, except for the Paris area (expected around Christmas). As elsewhere, it is a backbone of ATM 155 Mbps links. The US connection is via one 155 Mbps line (saturated); the TEN-155 connection is through two 155 Mbps links. HEP is not using these international connections since IN2P3 is in the US-CERN-IN2P3 consortium "USLIC", which has its own 20 Mbps ATM link to Chicago and access to TEN-155 from CERN. Within the French NRN RENATER, the VPN for HEP is following the deployment of RENATER-2 with ATM links, so our VPN is now working, except in the Paris area where RENATER-2 is not yet operational but should be before the end of the year. In July, Denis said: Lyon-CERN is going to be upgraded in September from 6 to 34 Mbps. Now that telecoms have been deregulated, they have to go through the boring tender procedure; they started in September and hope to finish in January, with the line operational by the end of February. This is still the weak point of our system and affects QoS.

Since last June, a large fraction of the HEP US-French traffic has originated from BaBar. References: map at http://www.renater.fr/Section-migrationR2.html; general site at http://www.renater.fr

UK - Richard Hughes-Jones

There is a PPNCG (Particle Physics Network Coordinating Group) with membership from HEP & astronomy. Its remit is to ensure the required network facilities for the community, monitor end-to-end performance, investigate new technologies/applications, and provide advice on kit/facilities. They do active network monitoring (ping, ftp, traceping) and ICFA monitoring, and report problems to UKERNA. There are regular meetings to which UKERNA is always invited (& often attends), and the group is recognized as a subject in JNUG (Janet National Users Group) & JISC (funding body).

SuperJANET connects libraries, schools and hospitals as well as higher education & labs. SuperJANET provides a 155 Mbps backbone with gateways to the MANs to which the sites connect.

There are 2*155 Mbps to the US into 60 Hudson (NorduNet & NL peer with Abilene here too); 25 Broadway is where Italy, Germany & DANTE connect; KEK, CERN, France and Canada connect to STAR-TAP; Japan comes in via LBNL. They believe the congestion is on the path to the US. Traffic grew by 20% in a month and is about 250 Mbps utilization (hourly average). This is consistent with the loss increases shown in Cottrell's presentation. Most traffic is from the US (4 times more). They also charge back for US traffic in order to put a restraint on things. Connectivity to DESY & CERN is OK. To the US the losses are bad enough that users are complaining.

They are using traceping from John MacAllister. These measurements indicate that the loss starts at the Sprint link connection. The loss can be pin-pointed fairly well by running traceping from both ends and seeing where the onset starts in each case. The congestion follows the US working day.
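
The idea behind this kind of localization can be sketched as follows (a simplification of what traceping actually does, with placeholder hop addresses): ping each router along a path obtained from traceroute and report where loss first sets in.

    #!/usr/bin/env python3
    # Sketch of per-hop loss localization (simplified; hops are placeholders).
    import re
    import subprocess

    # Placeholder router addresses, e.g. copied from a traceroute output.
    PATH = ["192.0.2.1", "192.0.2.2", "198.51.100.1", "203.0.113.9"]
    COUNT = 20  # probes per hop

    def loss_to(host, count=COUNT):
        """Ping one hop (Unix-style -c flag assumed) and return the
        observed packet loss in percent."""
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        received = len(re.findall(r"time=[\d.]+", out))
        return 100.0 * (count - received) / count

    for hop, addr in enumerate(PATH, start=1):
        print("hop %2d  %-15s  %.0f%% loss" % (hop, addr, loss_to(addr)))
    # The hop where loss jumps from ~0% to a sustained value marks the
    # likely onset of congestion; running the same probe from the far
    # end narrows it down further, as described above.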

There is a lot of interest in video conferencing. They had a UK-sponsored pilot (UKERNA PIPVIC2 project). They also looked at MBONE/IETF tools; most usage is multicast. HEP users want to use the tools but have been frustrated with setting them up and using them. They have been doing H.320 ISDN tests for group facilities. Data sharing with NetMeeting has also been successful. He showed packet utilization as a function of clapping, rubbing an eye, and changes in lighting (e.g. the sun coming out). Packet size distributions show peaks at the audio encoding packet sizes and a wider peak as one approaches the maximum packet size.

Managed bandwidth on SuperJANET: BaBar requires large bandwidth between sites. They will try to run dedicated data paths across SuperJANET between BaBar servers. It will be a pilot for UKERNA's MBS to establish technical and political feasibility. The test will use PVCs that bypass parts of campus networks, and will compare bandwidth, latency & loss. There will be 2 Mbps PVCs between Manchester, RAL and Imperial College.

They want to do some tests with UKERNA on delivering QoS to users by multiplexing traffic over one circuit. They are looking at adaptive X11, telnet and ftp.

UKERNA has concluded peering with Abilene in NY; CAR sets bits, which causes problems with Mac TCP/IP. Peering with ESnet at Hudson St., NY is expected in December 1999. UKERNA projects 3*155 Mbps at the end of 1Q2000, and 4*155 Mbps by the start of 4Q2000.

Conclusion: physicists in the UK have severe problems collaborating with experiments in the US. Network links to the US are OK (at the moment); the problem is peering with ESnet in NY. In the short term, exploit the UK-US I2 tests. There is also a HEPtel service for ssh & telnet via CERN; it is extremely useful, though only 64 kbps, and there are no plans to drop this service.

We need to ensure there is an ESnet representative at the next meeting even if it is via a video conference.

Testing of WAN link throughput & Video - Harvey Newman

They want to look at performance between CERN & Caltech and other sites, see where the limits on end-to-end performance for a single flow are, and tune stacks etc. accordingly. They sent 23 Mbps of UDP traffic and got 16 Mbps through. They then ran TCP and got 1 Mbps with the default TCP stack setting (8 KByte window size). After careful tuning of the TCP stacks (see http://www.psc.edu/networking/perf_tune.html) they got 10-13 Mbps of TCP throughput depending on competing traffic (this was with a window size of 655360 bytes). It is now automated to send once every 5 minutes for 10 seconds. See http://sunstats.cern.ch/mrtg/netperf-cacr.html for plots of the data.
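
These numbers are consistent with TCP's window limit, throughput <= window / RTT. The back-of-the-envelope check below assumes a transatlantic RTT of about 170 ms, which is our assumption rather than a figure from the measurements:

    # Window-limited TCP throughput: throughput <= window / RTT.
    RTT = 0.170  # seconds; assumed CERN-Caltech round-trip time

    for window in (8 * 1024, 655360):   # default 8 KB vs tuned window
        mbps = window * 8 / RTT / 1e6
        print("window %7d bytes -> at most %5.1f Mbps" % (window, mbps))

    # At this RTT the 8 KB default caps throughput well below 1 Mbps
    # (the observed ~1 Mbps suggests a shorter effective RTT), while the
    # 655360-byte window raises the ceiling to ~30 Mbps, comfortably
    # above the 10-13 Mbps achieved against competing traffic.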

As a follow-on they would like to get accounts and machines to run Netperf elsewhere, including SLAC & FNAL. Work with Olivier on this.

The Virtual Room Video-conference System (VRVS) now has 2 people at Caltech and 1 at CERN assigned to develop the system. It is now a production, though not commercial, system. The packaging needs much work and support is not available. The problem is getting the basic MBONE tools to work on many platforms and interfaces (cameras, microphones, projectors), especially since new drivers and devices are appearing daily. It is available for several Unix flavors (including Linux) and NT. They will be adding MPEG2 support. They know how to integrate H.323 clients (e.g. NetMeeting) into VRVS. RAL is very interested in trying it out. There are 23 reflectors, about 1700 hosts registered, and there have been more than 2500 point-to-point connections since the beginning of 1999. MPEG2 can provide full TV quality, full frame & full interactivity in a range of 2 to 15 Mbps. They are looking at acquiring a Minerva MPEG2 encoder/decoder; ESnet has already selected this as the preferred MPEG2 solution in ESnet. A recent demo at the ESCC last month was a great success. One box will be deployed at each of Caltech & CERN. Are other HEP institutes interested in participating in the test and deployment (the cost is about $12K/site)? VRVS has been used outside HEP, e.g. GLAST had a 10-participant VRVS conference in August. There are concerns about whether and how (NetMeeting uses multiple ports) to allow NetMeeting through firewalls.

Japan - Yukio Karita

See http://www.kek.jp/~karita/icfascic-nov99. Taiwan, BINP & IHEP come in through KEK. NACSIS peers with Abilene & ESnet. BINP has 128 kbps, IHEP 128 kbps, and Taipei (Academia Sinica) 128 kbps. An inter-region 20 Mbps to ESnet is in progress. NACSIS has 200 Mbps to the US (Abilene, Teleglobe).

NACSIS (Sep-99) has 150 Mbps to Cupertino and 2 Mbps to London to TEN-155. From Oct-99: 250 Mbps NACSIS to US/San Jose; from San Jose 30 Mbps to NY and 30 Mbps to STAR-TAP, hence to Abilene and CERN (CERN has a 5 Mbps MBS to London and thus to NACSIS); from NY 15 Mbps to London & 10 Mbps to Frankfurt. 200 Mbps is for general A&R traffic, and 180 Mbps at STAR-TAP is for NACSIS peering with Teleglobe for default traffic. Connectivity to US universities had been a big problem, now half solved by NACSIS peering with Abilene (but this does not fix MIT, Harvard, Cornell, UT Dallas ...); can we expect all major universities to be connected to Abilene, and when? vBNS has become commercial (MCI) and will continue.

Bandwidth for JP-EU is now 30 Mbps, of which 15 Mbps is at NY with 10 Mbps extended to Frankfurt for NACSIS, peering with the TEN-155 IP service. In addition there is 15 Mbps at London.

There is 10 Mbps for KEK's peering with ESnet at STAR-TAP. 10 Mbps for KEK peering with ESnet at San Jose awaits DOE/HENP funding. 5 Mbps for KEK peering with CERN awaits a CERN-London circuit.

Novosibirsk is connected by a 2 Mbps HENP-dedicated fiber circuit connecting BINP & MSU in Russia. KEK-MSU/ITEP connectivity shows 160-170 ms RTT and 0% loss. This does not transit to ESnet.

As inter-regional connectivity for HEP improves, regional connectivity also needs to improve: the 128 kbps links for BINP, IHEP and Academia Sinica are too small. Replacing the leased lines with frame relay is being planned; a 1.5 Mbps PVC/CIR is cheaper than a leased line.

They are looking at IP multicast over satellite; with the usual TV antenna & tuners they could get 30 Mbps. An ACFA has just been started with members from Thailand (Swichit), India (A. K. Gupta - CAT, M. Summit - NSC), China (Rongsheng S. Xu - IHEP), Korea (Hwan Bae Park) & Japan (Yukio Karita).

In addition, Japan-Europe Gamma testbeds have been in place since May 1999, provided by ESA & MPT. There is also an ATM PVC connecting KEK & CERN via a 2 Mbps satellite link, for testing MONARC.

The next generation of accelerators may have the builders of detectors and accelerators at the lab, but the analysts will be remote. This will be a new paradigm and will require excellent networking.

DESY Update - Michael Ernst 

There is a dedicated PVC for DESY (~3 Mbps unidirectional from the US to DFN) from NY to Hanover as part of DFN's QoS pilot. The transatlantic link was upgraded from 155 Mbps to 4*155 Mbps by mid October 1999. This is distributed to 4 places in Germany (Cologne, Munich, Leipzig & Hanover), replacing the old 1*155 Mbps to Frankfurt. The DFN PoP moved to NY/Telehouse. The upstream to UUnet was also upgraded to 4*155 Mbps (Packet over SONET). Abilene is accessible via DANTE/TEN-155 at 45 Mbps; however, transit for DFN traffic via Abilene to STAR-TAP is prohibited so far.

In April 2000 the NRN in Germany will be based on an OC48 (2.5 Gbps) backbone. Access speed to DESY will be upgraded to 155 Mbps. Connectivity to Japan improved significantly, resulting in a typical RTT of < 300 ms (formerly 500 ms) and very low packet loss. The DESY FSU satellite link is now funded by NATO. There are currently 192 kbps to Armenia, 128 kbps to Georgia & 128 kbps to Kazakhstan. They hope to upgrade all NATO-funded links to 2 Mbps. Connectivity to institutions in the Moscow area (ITEP, LPI, etc.) is provided by a 2 Mbps link from a commercial satellite provider (RSCC); this is entirely devoted to HEP traffic. DESY traffic volume is fairly stable at around 1400 GB/month domestic and 200 GB/month international.

The DESY LAN is being upgraded to structured wiring using Cisco-based equipment (3000 ports bought, e.g. Cisco Catalyst 6500s); the typical user connection will be 10 or 100 Mbps.

Internet access at CERN - Manuel Delfino

CERN has a CIXP (to attract fiber to CERN: France Telecom has 2*OC48, Swisscom has 2*OC12 redundant SDH; new telecom operators are DiAX, MCI/Worldcom, Carrier1, SUNRISE, SIG/Thermelec, Multilink(*), Smartphone(*); more telcos are expected (e.g. COLT); in all there are 20+ commercial ISPs), connections to TEN-155, a connection to the CERN PoP in the USA (since August 1999, in Chicago at Ameritech) and STAR-TAP (ESnet, Canada, vBNS, MREN & Abilene, Japan). Ideally there would be one worldwide research network; the reality is that there are several, and CERN needs to allow all users to participate in the physics.

QoS monitoring is done with various statistics, including PingER, Surveyor and traceping. DiffServ is the preferred approach to QoS. QoS is needed for video and VoIP. They saw some bugs in the CAR-capable IOS and the capability has been disabled.

They do not expect to buy another PABX. They want to set up a pilot with say 50 phones for VoIP, driven by the data network folks; Denise Heagerty is working on authentication and security, and Rainer Toebikke is doing VoIP and differentiated services. DESY is also keen to go ahead with a pilot. They also want to promote gateways to the local areas, which is already happening. FNAL increased the channels to the PABX at its site from 2 to 4.

The new sync-o-matic (from the University of Michigan) allows synchronizing of slides with video etc. It is very manpower intensive (e.g. 4 times the lecture time to produce the presentation: the lecture has to be videoed first, then reviewed and synchronized). See http://webcast.cern.ch/. Manuel gave a demo which was easy to follow. FNAL is also very interested, in particular in making it more efficient. The end product is very valuable; it is a great teaching tool. There are problems with the audio characteristics of rooms and echo cancellation; will we stay in our offices looking at our laptops with headsets on (a headset & mike reduces the need for echo cancellation, extraneous noise etc.)? There are high user expectations and exploding usage; however, there are large support problems, which at CERN are hitting at a time of manpower reduction & IT budget squeeze, but CERN has felt it important enough to allocate 180 KCHF in the budget for additional support through outsourcing. The need is to evolve towards an "appliance" with minimum support.

CERN has a transparent Web cache from the Network Appliance folks.

8 telecom providers have been invited to tender for an up-to-155 Mbps transatlantic link by March 15th. They believe a 43 Mbps link will be budgetable. The current provider is C&W with an E3.

There are VoIP problems between CERN & DESY due to an Ascend ATM switch bug. Matthias Kasemann reported that VoIP calls between FNAL & DESY have got worse in the last few months, in some cases resulting in disconnects. This is not attributable to the Ascend bug. Vyto Grigalianas of FNAL provided the following information: "It appears we are having some layer 2 problems with our BRI. We are in the process of relocating our 3640 (and the present BRI along with an additional BRI, both in a hunt group) to a more permanent location and I wouldn't be surprised if Ameritech (or should I say SBC) has messed something up in the process (although they deny it but I've heard that before). Anyway, we will try and track down our BRI problems..."

12 MCHF has been invested in the local infrastructure, and investment will continue at 1 MCHF/year. The next step is to upgrade the remaining legacy equipment, which causes more than its fair share of problems. The existing 10 Mbps with FDDI backbone will go to 10/100 Mbps with a Gbps core, all Ethernet. The core backbone will be a star of core switches with Gbps to building switches, 10/100 Mbps to desktops and 100/1000 Mbps for clusters. The end user pays for things that connect to the building switches (e.g. another switch etc.) but the network group specifies, installs and manages them. They are using Cabletron equipment; FNAL, KEK & SLAC are using Cisco equipment. 10 Gbps is expected on the market by mid 2001; Gbps-over-copper units are expected by the middle of next year.

Report to ICFA - Matthias Kasemann

There was considerable interest in the expected bandwidth growth predictions. This will need more justification of the data flows.

Originally we (SCIC) planned to deliver a report to ICFA every 2 years, in 2000/2002, for monitoring/recommendations, with updates in 2001/2003. In the past the working groups prepared a report that was then distilled into a report to ICFA. The question is what kind of report we can generate by February. Since reports were made in August and October it may not be necessary to make a report in February. It may be more effective for SCIC to report to ICFA on the immediate problem areas. One particular problem area is the outlying regions (outside the developed regions of the world: W. Europe, N. America and Japan), which have poor connectivity (and may be falling further behind) and yet are delivering important contributions to HEP. In many cases these countries need an enabler to push Internet connectivity for the NREN. The people in ICFA (e.g. the DG of CERN) are closer to the level that can have an impact on the political representatives in the regions. They can ask how we can help, what the plans are, what the countries can do, and how best to spend limited money; they can suggest goals (e.g. a 2 Mbps line to the country); and participation in some R&D project with a western lab/experiment that requires video conferencing for all collaborators may help drive the requirement. We need a concrete recommendation that ICFA can subscribe to. This proposal needs to be acceptable to the country, which requires pre-discussion with the country. We need to plant the idea in the minds of the ICFA people and let them think about how to solve the problem and where they can contribute.

The ICFA meeting is on 10-11 February at RAL, so we need something by the end of this year so we can polish it for the meeting. There will be a video conferencing meeting around 4pm CERN time on Friday, January 7th.

Besides the network status reports, we are hearing more about network aware applications. We need to help develop such applications.

We need to take our concerns to the ESSC. The ESnet International meeting will be held by video. The next in-person ESSC meeting will be held in Japan in March/April 2000.

