Voice Conference Meeting

SLAC, March 15, 2000
Rough notes by Les Cottrell, SLAC


Status Reports

Many people have submitted status reports to the NGI email list. This was very timely since Mary Anne has said "It is very gratifying to observe the impact your project is having on the HEP community" and has now asked for such reports:
>We have identified some resources within the office that we hope will 
>provide some additional funding this year for some of the projects 
>that were initiated in FY99 under the NGI program.  To prioritize I 
>need an up-to-date assessment and status of all the projects.  Would 
>you please provide that for me for your project.  Namely, what is 
>progress thus far, what progress is expected for the remainder of the 
>year and what could be gained by an additional resource on the order 
>of 3-4mo support.
It would be useful to know the scale of funding likely to be available. Stu believes prospects for 2001 funding are also reasonable. The funding may not come on a fiscal year boundary. We should guesstimate that there will be roughly $300-400K for the remainder of this fiscal year. We should figure out how we could use this money to show visible success by the end of the year. Harvey felt that demonstrating 100 Mbytes/second between sites is an important measurable objective. The NTON OC48 link between SLAC & Caltech is in place, but there are problems making the interfaces (8 OC3s at Caltech and 2 OC12s at SLAC) talk to one another and bond the links somehow. Another objective would be to make progress on the caching, storage broker, matchmaking, etc.
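
As a rough sanity check on the 100 Mbytes/second objective, the sketch below compares the nominal SONET line rates of the interfaces mentioned above against the target. It is only a back-of-the-envelope bound: SONET framing, ATM cell and TCP/IP overheads are ignored, so achievable throughput would be noticeably lower than these numbers.

# Back-of-the-envelope check of the 100 Mbytes/s objective against the
# nominal SONET line rates of the interfaces mentioned above.  Overheads
# (SONET framing, ATM cells, TCP/IP) are ignored here.

OC1_MBPS = 51.84                      # nominal OC-1 line rate in Mbit/s

def oc(n: int) -> float:
    """Nominal line rate of an OC-n link in Mbit/s."""
    return n * OC1_MBPS

target_mbps = 100 * 8                 # 100 Mbytes/s expressed in Mbit/s

caltech_side = 8 * oc(3)              # 8 x OC3 interfaces at Caltech
slac_side    = 2 * oc(12)             # 2 x OC12 interfaces at SLAC
nton_trunk   = oc(48)                 # NTON OC48 between SLAC and Caltech

for name, mbps in [("8 x OC3 (Caltech)", caltech_side),
                   ("2 x OC12 (SLAC)", slac_side),
                   ("OC48 trunk (NTON)", nton_trunk)]:
    print(f"{name:20s} {mbps:7.1f} Mbit/s  "
          f"{'>=' if mbps >= target_mbps else '< '} target {target_mbps} Mbit/s")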

We need to get better understanding and advice from Dan Hitchcock of DoE/MICS. Another possibility is to talk to Tom Dunning. Stu will take the lead on this. We also need to get our Status Report together and make DoE/MICS etc. aware of it. Stu will work on this and distribute it for review when ready.

Request Broker

Part of the plan set up by Ari has been to pursue functionality rather than speed. The goal is to have something in Wisconsin that can talk to multiple storage brokers (looking at a replica catalog) and make a decision on where to get the data from. Files can be fetched out of disk caches via the SRB at LBNL and FNAL. LBNL is also looking at developing a distributed manager, which might use the LBNL DPSS as a staging system, getting the data from HPSS. Reagan reported that there is an interface in the SRB to go from HPSS to DPSS (i.e. there is an API to allow a request to get data from HPSS and write it into DPSS). There is another instance of the SRB running at SSRL/SLAC for biochemistry. Globus can access HPSS for user-owned data but not to access a collection. SRB has both types of APIs defined (User and Collection). For the near term, Harvey said it is important to demonstrate performance.
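
To illustrate the kind of decision the request broker is being asked to make (pick a source for a logical file after consulting a replica catalog), here is a minimal sketch. All names in it (Replica, catalog, estimated_cost, choose_replica) are hypothetical illustrations, not the actual SRB, Condor or Globus APIs, and the numbers are made up.

# Toy request-broker decision: given a logical file name, look up its
# known physical replicas and pick the one with the lowest estimated
# delivery time.  Purely illustrative; not the real SRB interface.

from dataclasses import dataclass

@dataclass
class Replica:
    site: str                  # e.g. "LBNL", "FNAL"
    store: str                 # e.g. "DPSS", "HPSS", "disk-cache"
    size_bytes: int
    est_mbytes_per_s: float    # assumed achievable transfer rate

# A toy replica catalog: logical file name -> known physical copies.
catalog = {
    "run1234.evts": [
        Replica("FNAL", "disk-cache", 2_000_000_000, 8.0),
        Replica("LBNL", "DPSS",       2_000_000_000, 30.0),
        Replica("LBNL", "HPSS",       2_000_000_000, 4.0),
    ],
}

def estimated_cost(r: Replica) -> float:
    """Seconds to deliver the whole file at the assumed rate."""
    return r.size_bytes / (r.est_mbytes_per_s * 1e6)

def choose_replica(lfn: str) -> Replica:
    """Pick the replica with the lowest estimated delivery time."""
    return min(catalog[lfn], key=estimated_cost)

best = choose_replica("run1234.evts")
print(f"fetch run1234.evts from {best.site}/{best.store} "
      f"(~{estimated_cost(best):.0f} s estimated)")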

There are now three possible sources of data for HEP: HPSS, CERN's Castor, and the Event Store Manager from FNAL. There will need to be drivers in SRB to provide access to these data stores. Ari has a proposed "standard" for the interface to the storage manager, and will try to publicize it in the next couple of weeks. We need to get this information to the CERN/Castor people; there is a Grid Forum next week that the CERN people will be attending and presenting at, so it would be good to give them the information by then. Harvey will talk to the CERN folks (e.g. Les Robertson) before the Forum to gauge interest etc. The idea would be to get a commitment from CERN and FNAL to provide interfaces to SRB. Today the interfaces are mainly at the file level. It is unclear how SRB and Globus will interact. The Globus GSI for authentication has been integrated into SRB. Next week, Reagan will be discussing with the Globus folks the integration of the Globus I/O with SRB. Ari will bring Miron up to speed on some of the discussions from the current meeting.
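
For illustration, a file-level "driver" interface that each data store (HPSS, Castor, the FNAL event store) would implement might look something like the sketch below. This is only a guess at the shape of such an interface; it is not Ari's proposed standard and not the actual SRB driver API, and every name in it is hypothetical.

# Hypothetical common driver interface for mass-storage back ends.
# Illustration only -- not the real SRB driver API.

from abc import ABC, abstractmethod

class StorageManagerDriver(ABC):
    """File-level operations each mass-storage back end would implement."""

    @abstractmethod
    def stage(self, path: str) -> None:
        """Bring the file from tape/archive onto the system's disk cache."""

    @abstractmethod
    def read(self, path: str, offset: int, nbytes: int) -> bytes:
        """Return up to nbytes of file data starting at offset."""

    @abstractmethod
    def write(self, path: str, data: bytes) -> None:
        """Store data as a new file in the back end."""

    @abstractmethod
    def status(self, path: str) -> str:
        """e.g. 'on-disk', 'on-tape', 'staging', 'absent'."""

class HPSSDriver(StorageManagerDriver):
    """One concrete driver per data store (HPSS, Castor, FNAL event store)."""
    def stage(self, path): print(f"HPSS: staging {path}")
    def read(self, path, offset, nbytes): return b""
    def write(self, path, data): print(f"HPSS: writing {len(data)} bytes to {path}")
    def status(self, path): return "on-tape"

drv: StorageManagerDriver = HPSSDriver()
drv.stage("/hpss/run1234.evts")
print(drv.status("/hpss/run1234.evts"))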

Are there plans to make the SRB at LBNL available for Clipper? This is part of the rationale for moving the HPSS data to DPSS, which will help bridge the link to Clipper and provide high-speed access to ANL.

Concerning high-speed data transfer between SLAC & Caltech, Davide Salomoni reported that SLAC is asking Sun whether it is possible to have multiple interfaces available on a single OC12 board. There can apparently be only one IP address for each Virtual Path (VP), so it is not possible to accept 4 of the OC3s from Caltech. An alternative might be to use a Fore ATM card instead. James Patton of Caltech is said to have configured a Fore ATM card to have multiple IP addresses, so we could multiplex 8 OC3s. Tests could be made with a pure OC3 between SLAC and Caltech, but this cannot demonstrate 100s of Mbytes/sec performance. Les & Davide will talk to Richard Mount to bring him up to speed and to see how to proceed.

Concerning staging, HPSS is not always configured (e.g. at SLAC) to use RAID. Thus the DPSS might be a very useful addition for staging. Ed May reports that the fastest he has gotten HPSS to transfer data is about 4 Mbytes/sec, so he has been using DPSS to stage data and get it to ANL via the Clipper OC12/622 Mbps link.
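
The arithmetic behind the staging argument is simple and worth writing down: at roughly 4 Mbytes/sec out of HPSS, only a small fraction of the 622 Mbps Clipper OC12 can be filled, which is why a faster intermediate store such as DPSS is attractive. Nominal line rate only; protocol overhead is ignored.

# HPSS read rate vs. nominal Clipper OC12 capacity.

hpss_mbytes_per_s = 4.0        # rate Ed May reports getting out of HPSS
oc12_mbps = 622.08             # nominal OC12 line rate

hpss_mbps = hpss_mbytes_per_s * 8
utilization = hpss_mbps / oc12_mbps

print(f"HPSS at {hpss_mbytes_per_s} Mbytes/s = {hpss_mbps:.0f} Mbit/s, "
      f"about {utilization:.0%} of an OC12")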

European Grid Activities

There is a lot of enthusiastic interest in the UK, Italy and France, especially at the higher national levels. The plan is to put together a proposal to be submitted in May; a draft is circulating. The proposal includes NIKHEF, CERN, the UK, Hungary and others, and asks for about 70M EU (30M EU for personnel, 12M EU of which is for applications personnel). It also tries to indicate where the proposal differs from, or addresses issues not addressed by, the PPDG.

CERN Upgrades

CERN will be upgrading to 43 Mbps by the end of this month, and hopes for 155 Mbps by the end of summer. Atlas wants to transfer 4 Tbytes by this summer and 10 Tbytes by the fall.
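
A rough feel for what those volumes mean on those links: the estimate below assumes the link could be used at its full nominal rate around the clock, which shared usage and protocol overhead would never allow in practice, so real elapsed times would be considerably longer.

# Sustained transfer time for the Atlas volumes at the planned link rates.

def days_to_transfer(tbytes: float, link_mbps: float) -> float:
    """Days to move tbytes Tbytes at a sustained link_mbps Mbit/s."""
    bits = tbytes * 1e12 * 8
    return bits / (link_mbps * 1e6) / 86400

print(f"4 Tbytes at 43 Mbps  : {days_to_transfer(4, 43):.1f} days sustained")
print(f"10 Tbytes at 155 Mbps: {days_to_transfer(10, 155):.1f} days sustained")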

Feedback