NGI/PPDG planning meeting

Caltech 5 August, 1999
Rough notes by Les Cottrell, SLAC

Introduction

These notes are incomplete: due to an error on the part of the author, the notes from the first two hours of the meeting were lost.

There were about 18 attendees from CalTech, SLAC, LBNL, ANL, FNAL, UCSD, U Wisconsin & Florida U.

Matchmaking - Miron Livny

http://www.cs.wisc.edu/condor

The Condor focus is on high performance sustained computing, i.e. FLOPS/year (aka FLOPY). There are multiple management layers: application, application agent, customer agent, environment agent, owner agent, local resource management, resource. The environment agent is the matchmaker. Parties (owners and customers) use classified ads (ClassAds) to advertise properties, requirements & ranking to a matchmaker. ClassAds are self-describing (there is no separate schema; each attribute has a name and a value) and they combine query and data. Price can be one of the fields in a ClassAd, as can the affinity of a customer, with priorities associated with affinities (e.g. the owner of a machine has higher priority than the research group, which in turn has higher priority than a friend of the research group). The matchmaker parses the requirements and matches them to resources: ClassAds A & B match if A.Constraint and B.Constraint both evaluate to true in the context of each other and the environment. There needs to be some standardization (at least within a community) on what the terms used mean (e.g. if a car is advertised with an unknown model, people can't decide whether the advertisement matches their need).

The ClassAds are pushed to the matchmaker. The negotiator in the matchmaker wakes up and talks to the customer agents to get the current status of the machines; it also gets the customer's current prioritized requirements, and can then match the customer with a resource. Then there is a claiming process in which the customer and owner decide on leases and how to relinquish them (e.g. if the network goes away).
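
To make the matching rule concrete, here is a minimal Python sketch of two-sided matchmaking in the spirit of ClassAds; the attribute names and the dict/lambda representation are purely illustrative and are not the actual ClassAd language or Condor API.

# Minimal two-sided matchmaking sketch (illustrative only; not the real
# ClassAd language). Each ad carries attributes plus a requirements
# predicate that is evaluated in the context of the other party's ad.

def matches(ad_a, ad_b):
    """Ads A and B match if each ad's requirements hold against the other."""
    return ad_a["requirements"](ad_b) and ad_b["requirements"](ad_a)

# An owner advertises machine properties and constraints on acceptable jobs.
machine_ad = {
    "type": "Machine",
    "memory_mb": 512,
    "requirements": lambda other: other.get("group") in ("owner", "research", "friend"),
}

# A customer advertises what the job needs and how to rank acceptable offers.
job_ad = {
    "type": "Job",
    "group": "research",
    "requirements": lambda other: other.get("memory_mb", 0) >= 256,
    "rank": lambda other: other.get("memory_mb", 0),  # prefer more memory
}

if matches(machine_ad, job_ad):
    print("match, rank =", job_ad["rank"](machine_ad))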

Condor has cpu cycles to offer. They are already running jobs in Wisconsin for CalTech. They have over 500 cpus with over 200 GBytes of disk space. We are welcome to pick them up and use them.

Earth Systems Grid - Arie Shoshani 

There are overlaps and correspondences between the Earth System Grid and the PPDG. The Earth System Grid data is typically time related (e.g. one wants the value of one variable at multiple or all time ticks), which is not usually a feature needed in the PPDG databases.

They have the concept of file filtering at the sites (both the local site and a remote/caching site). This can dramatically reduce the bandwidth requirements. In the first version there will be no filtering. There will be the ability to assemble a file from multiple sites. Another requirement is a unique file name identifier across the system; there are 2 parts, the global system ID and the local name, and they suggest using LDAP conventions for the global system IDs. Another issue is the file format (they settled on NetCDF). A big issue early on is reservation & coordination of space, and also of devices & bandwidth for moving data from storage to storage. There are also the associated issues of how to pin & reclaim space.
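
As an illustration of the two-part naming scheme, the sketch below maps an LDAP-style global system ID to per-site local names; the DN fields, site names, and file names are invented for the example and are not the actual ESG conventions.

# Sketch of a two-part file identifier: an LDAP-style global system ID plus
# per-site local names (all names below are made up for illustration).

replica_catalog = {
    "cn=tas_1990.nc, ou=pcmdi, o=esg": [
        ("llnl-hpss", "/hpss/climate/pcmdi/tas_1990.nc"),
        ("lbnl-cache", "/cache/esg/tas_1990.nc"),
    ],
}

def replicas(global_id):
    """Return the (site, local name) pairs registered for a global system ID."""
    return replica_catalog.get(global_id, [])

for site, local_name in replicas("cn=tas_1990.nc, ou=pcmdi, o=esg"):
    print(site, local_name)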

Globus project, data grid toolkit, storage API, beta grid - Ian Foster

They are building grid services (protocols, authentication, policy, resource management, ...) which need application interfaces to go on top (e.g. an interface to remote data applications such as PPDG). It is a joint effort of LBNL, ANL, SDSC, NCSA, etc. There is a storage client (SC-) API for storage management, access, 3rd party transfers, and property manipulation, with striping, caching, protocol parameters, synchronous & asynchronous calls & integrated instrumentation.
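
The actual SC-API signatures were not captured in these notes; the sketch below only illustrates the kinds of calls described (access, third-party transfer, asynchronous operation), with invented names.

# Illustrative sketch only: the class and method names are invented to show
# the kinds of operations described (access, 3rd-party transfer, async calls);
# this is not the actual Globus storage client API.

import threading

class StorageClient:
    def get(self, src_url, local_path):
        print("fetching", src_url, "->", local_path)

    def third_party_transfer(self, src_url, dst_url):
        # Data flows directly between the two storage systems; the client
        # only initiates and monitors the transfer.
        print("transferring", src_url, "->", dst_url)

    def get_async(self, src_url, local_path, on_done):
        # Asynchronous variant: return immediately, call back when finished.
        t = threading.Thread(target=lambda: (self.get(src_url, local_path),
                                             on_done(src_url)))
        t.start()
        return t

sc = StorageClient()
sc.third_party_transfer("hpss://fnal.example/store/run1.dat",
                        "file://caltech.example/cache/run1.dat")
sc.get_async("hpss://slac.example/babar/evt.db", "/tmp/evt.db",
             on_done=lambda url: print("done:", url)).join()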

FermiLab initial workplan - Vicky White

FNAL is eager and is starting to put together some prototypes to try out the PPDG ideas. Potential contributions: data & experimenters who will use the grid; HPSS & Enstore storage management plus many TBs of disk space; an existing framework with SAM; CMS work with Objectivity. They have 0.2+0.5+0.2 FTEs already working on this, plus 1 FTE to hire, plus Ruth & Vicky. One project is file replication (section 4.1.1) between FNAL & Indiana; it will try HPSS and some parts of Globus with NetLogger, and will track file status & performance. Another project will be data staging & caching of files using the SAM framework (section 4.1.2) & Condor with D0 data, i.e. caching at D0 university/lab satellite sites (e.g. Maryland, LBNL).
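
As a rough illustration of tracking file status & performance with timestamped events, here is a minimal sketch; it is not the actual NetLogger API or log format, and the event and field names are made up.

# Minimal sketch of timestamped event records for tracking replication status
# and performance (not the actual NetLogger API or format; names are made up).

import time

def log_event(event, **fields):
    stamp = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    extras = " ".join("%s=%s" % (k, v) for k, v in fields.items())
    print(stamp, "EVENT=" + event, extras)

log_event("replicate.start", file="run1.dat", src="fnal", dst="indiana")
# ... move the file ...
log_event("replicate.end", file="run1.dat", mbytes=2000, seconds=180)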

Managing distributed data & meta data - Arcot Rajasekar (SDSC)

SRB (Storage Resource Broker) is an intelligent data access system which provides federated access to diverse databases and distributed storage systems transparently (protocol transparency), with location transparency for the user. It works with the Network Weather Service to determine the best place to get a file from. It has built-in encryption (and will also provide the GASS infrastructure in the next release, so there is a single sign-on). SRB talks to MCAT.
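
To illustrate the location transparency plus Network Weather Service idea, here is a small sketch in which a logical name is resolved to physical replicas through a catalog and the best source is chosen by predicted bandwidth; the interfaces and names are invented, not the real SRB/MCAT/NWS APIs.

# Sketch of location-transparent access: resolve a logical name to physical
# replicas via a catalog and pick a source by predicted bandwidth.
# (Illustrative only; not the actual SRB, MCAT or NWS interfaces.)

catalog = {  # stands in for MCAT: logical name -> physical replicas
    "srb:/babar/evt001": ["hpss://slac/babar/evt001",
                          "ufs://sdsc/cache/evt001"],
}

forecast_mbps = {  # stands in for Network Weather Service forecasts
    "hpss://slac/babar/evt001": 12.0,
    "ufs://sdsc/cache/evt001": 45.0,
}

def best_replica(logical_name):
    """Choose the replica with the highest predicted transfer rate."""
    return max(catalog[logical_name], key=lambda r: forecast_mbps.get(r, 0.0))

print(best_replica("srb:/babar/evt001"))  # -> ufs://sdsc/cache/evt001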

MCAT is a cataloging system, a metadata repository (digital objects; system-level metadata such as access control and audit trails; schema-level metadata such as the semantics of relationships). There are 4 MCAT installations with 90 registered users.

The SRB design is to provide middleware between clients and resources; it provides a logical abstraction at every level.

Experience with OOFS - Andy Hanushevsky

A robust system is needed; this includes recovery from crashes, which must not take the system down. AMS is a page server; it is not cognizant of objects. The disk cache is a high performance file system (Veritas). AMS is basically a protocol layer between the application and the OOFS (the file system logical layer); below OOFS is the OOSS (the file system physical layer). Currently OOSS interfaces to the Veritas file system and to HPSS. SLAC designed and developed the OOFS and OOSS.

There will be a defer request protocol (transparently delays the client while data is made available); an opaque information protocol (allows the client to provide information for improved performance); a request redirect protocol (redirects the client to an alternate AMS, providing for dynamic replication and real-time load balancing); and a generic authentication protocol (allows authentication of incoming requests via Kerberos, PGP, RSA, etc.). At a more detailed level, they have increased from 1024 to 8192 file descriptors, modified logging so that only changes are physically copied, and can use the NetLogger analysis/reporting tools. It is important to separate the tape drive server so it is not on a critical (reliability-wise) server, since tape drives die in ways that require a reboot. Today's Gbit networks enable such separation of the tape server from the data servers.
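
The defer and redirect behaviours can be pictured with the small client-side sketch below; the reply statuses, fields, and the stand-in server are invented for illustration and are not the actual AMS wire protocol.

# Client-side sketch of handling "defer" and "redirect" replies
# (illustrative only; not the actual AMS protocol or its message formats).

import time
from collections import namedtuple

Reply = namedtuple("Reply", "status handle wait_seconds alternate_server")

class FakeServer:
    """Stand-in for an AMS that returns a scripted sequence of replies."""
    def __init__(self, replies):
        self.replies = list(replies)
    def open(self, path):
        return self.replies.pop(0)

def open_file(server, path, max_tries=5):
    for _ in range(max_tries):
        reply = server.open(path)
        if reply.status == "OK":
            return reply.handle
        if reply.status == "DEFER":          # data being staged, e.g. from tape
            time.sleep(reply.wait_seconds)
        elif reply.status == "REDIRECT":     # replication / load balancing
            server = reply.alternate_server
        else:
            raise IOError("open failed: " + reply.status)
    raise IOError("gave up after repeated defers")

primary = FakeServer([Reply("DEFER", None, 0.1, None),
                      Reply("REDIRECT", None, 0, FakeServer([Reply("OK", 42, 0, None)]))])
print(open_file(primary, "/store/events.db"))  # -> 42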

AMS is an ideal PPDG server application. Proven to work with INFN staging and CERN rfio, it is a real world application, used not only by BaBar but at several other sites around the world.

Report from NGI Testbed meeting at Berkeley - Les Cottrell

See http://www.slac.stanford.edu/grp/scs/net/talk/ppdg-aug99/

NGI network configuration: plans & problems for CAGR - James Patton

Only CalTech is currently connected to NTON; others who want to connect include SDSC, SLAC & LBNL. Other (more production) networks include: CalREN2 (already peering with Abilene on the North ring, supposed to be doing so on the Southern ring) - for CalTech & SDSC; ESnet - SLAC, LBNL, ANL, FNAL & JLab; MREN - ANL, FNAL & Wisconsin. The interconnects between the AS (Autonomous System) networks are currently only 43 Mbps. For the first deliverable (100 MBytes/sec) we need NTON for anything outside LBNL & ANL. A big question is how to make reservation policies and usage restrictions work across multiple networks. Once the pipe is on the doorstep, where does it go: is it routed over the campus backbone, or are there dedicated links to participating researchers? How does one route for machines using both commodity & dedicated networks: routing is destination based, so do we renumber, or do we use VPNs (see the sketch below)? Each site needs to provide a diagram of the LAN configuration as it pertains to PPDG; James will send out an email request, and Les will set up a mailing list for the PPDG LAN/WAN contacts.
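
To make the destination-based routing point concrete, here is a small sketch of longest-prefix-match forwarding: traffic only uses the dedicated link if its destination sits on a prefix that routes there, which is what drives the renumbering/VPN question. Addresses and link names are made up.

# Sketch of destination-based (longest-prefix-match) forwarding, showing why
# only destinations renumbered onto a dedicated prefix use the dedicated link.
# (Addresses and link names are made up for illustration.)

import ipaddress

routes = [  # (destination prefix, outgoing link)
    (ipaddress.ip_network("0.0.0.0/0"), "commodity/production (ESnet, CalREN2)"),
    (ipaddress.ip_network("192.0.2.0/27"), "dedicated (NTON)"),  # renumbered PPDG hosts
]

def outgoing_link(dst):
    dst = ipaddress.ip_address(dst)
    candidates = [(net, link) for net, link in routes if dst in net]
    net, link = max(candidates, key=lambda c: c[0].prefixlen)  # longest prefix wins
    return link

print(outgoing_link("192.0.2.10"))   # PPDG data server -> dedicated link
print(outgoing_link("192.0.2.200"))  # other host at same site -> commodity link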

Another issue is what interconnects a site has to the outside world: is it ATM or POS, is the equipment already on hand, what equipment needs purchasing, what is the MTU (ATM 9180 bytes, POS 4470 bytes), and how to configure for maximum bandwidth.
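
As a rough illustration of the configuration question, the arithmetic below shows the TCP window needed to sustain 100 MBytes/sec for a few assumed round-trip times, and how the MTU choice changes the packet rate; the RTT values are assumptions, not measurements.

# Bandwidth-delay-product arithmetic for the 100 MBytes/sec target
# (the RTT values below are assumptions for illustration, not measurements).

RATE_BYTES_PER_S = 100e6  # 100 MBytes/sec deliverable

for rtt_ms in (2, 10, 30, 70):
    window_mb = RATE_BYTES_PER_S * (rtt_ms / 1000.0) / 1e6
    print("RTT %3d ms: need ~%.1f MBytes of TCP window" % (rtt_ms, window_mb))

# Larger MTUs mean fewer packets per second at the same rate.
for mtu in (1500, 4470, 9180):  # Ethernet, POS, ATM
    print("MTU %5d: ~%6.0f kpkts/s at 100 MBytes/s" % (mtu, RATE_BYTES_PER_S / mtu / 1000))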

An action item is to get a commitment from NTON to provide service/connectivity between SLAC, LBNL, SDSC & CalTech. Following this we will need to work with ESnet to peer with NTON. Larry Price (chair of ESSC) offered to be of help in the latter activity.

Wrap up

We came up with a list of two people who need to be hired: one is a project coordinator, the other an implementer. It is unclear where they will be located. Richard will put together the requirements (skills & duties).

Several working groups were set up (the names in quotes below are the leaders).

Request manager (to cover the object-based & file-based application services' interface to the file access service & cache manager): Arie, "Ian", Miron, and Andy, plus an application person from each of CalTech, FNAL, BNL & SLAC.

Condor + PPDG director / SRB (Oracle), LDAP; cost estimation (think about the API); resource discovery & characterization, covering the matchmaker service and the file replication index: "Reagan", "Ian", Miron, plus an application person.

Globus storage client API to the file fetching service, the file mover (FTP/PFTP/...) and end-to-end network services: Ian, "Reagan", "Arie".

The next PPDG meeting will be on Thursday September 23rd, at SLAC.

Dave Millsom will coordinate the network working group.

