Rough Notes on the HENP Grid Meeting, Padova, Italy, February 2000

Author: Les Cottrell. Created: February 12, 2000


PPDG - Richard Mount

The PPDG does not provide network links; they come from ESnet, NTON, MREN, etc. Particle physics is a network-hungry application. The proposal is a collaboration of ANL, LBNL, BNL, Caltech, FNAL, JLAB, SLAC, SDSC & Wisconsin, and includes physicists, computing/network infrastructure support people, and computer scientists. It was funded in FY99 by DoE/NGI money, which it was hoped would continue. PPDG got $1.2M (60% of what was asked for). For the out-years DoE/NGI funding is not assured, but we are looking for and hopeful of future money.

The data rates needed: 100 MB/s for 2-5 physicists (raw data), 1000 MB/s for 10-20 physicists (scheduled reconstruction), 2000 MB/s for ~200 physicists (scheduled production analysis), 4000 MB/s for ~300 physicists (chaotic individual analysis). Need to access data across the WAN, making data analysis possible from hundreds of US universities. Will use QoS (not investing much effort in the 1st year, but plan to take advantage of the services as they become available) and distributed caching, & want robustness. First year deliverables include a high speed site-to-site replication service at 100 MB/s and a multi-site cached file access service (based on deployment of file cataloging, transparent cache management & data movement middleware); the first year will have optimized cached read access to files in the range of 1-10 GBytes from a data set of order a PB, using middleware components already developed by the proponents. The first year deliverables will probably not be very useful to others.

Will need to play with TCP windows and parallelism in order to get throughput, especially on the WAN. Part of the goal is to allow high speed bulk transport to occur while not preventing other work. We are working with ESnet and there is a research test bed. A typical site has several TB of disk space and hundreds of computers in a farm. Existing components include Objy/Oofs, SAM, GC cache manager, HPSS, Enstore, SRB, Globus, Condor, etc. They need to be integrated, which partially means defining and providing APIs.
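
As a rough illustration of why window size and parallelism matter on the WAN, the Python sketch below estimates the TCP window needed to fill a path and how many parallel streams would be required if each connection is capped at a default window. The bandwidth, RTT and window figures are illustrative assumptions, not measurements from the project.

    # Back-of-the-envelope bandwidth-delay product estimate (illustrative numbers only).
    def streams_needed(bandwidth_bits_per_s, rtt_s, max_window_bytes):
        """Window needed to fill the path, and parallel TCP streams required
        if each stream is capped at max_window_bytes."""
        bdp_bytes = bandwidth_bits_per_s * rtt_s / 8.0             # bandwidth-delay product
        streams = max(1, int(-(-bdp_bytes // max_window_bytes)))   # ceiling division
        return bdp_bytes, streams

    # Assumed example: an OC12-class (~622 Mbit/s) path with a 20 ms round-trip time
    # and a default 64 KB TCP window per connection.
    bdp, n = streams_needed(622e6, 0.020, 64 * 1024)
    print(f"BDP ~ {bdp / 1e6:.1f} MB; need ~{n} parallel streams at 64 KB windows")

Larger kernel windows, multiple parallel streams, or both achieve the same effect; the co-existence question is then how aggressively such transfers compete with other traffic.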

The project started in August 1999. There have been demonstrations & tests of middleware, there is a glass path from Caltech to SLAC, work on high speed transfer is under way, and SRB is in use by 3 sites (SDSC, Wisc, LBNL). Coordination is very important, especially as viewed by the funding agencies, so management is a very important early effort.

Achievements so far include 57 MB/s from SLAC to LBNL. Objy works with hundreds of workstations. Hope for 100 MB/s via NTON in California (SLAC, LBNL, Caltech, LLNL). Will use OC12, OC3 or T3 as available; need a bulk transfer service; latency is important, as is co-existence with other users. There are technical challenges, but a major challenge is political, e.g. how to get continued sources of funding and how to make the collaborations work (collaboration management).

Towards US (and LHC) Grid Environments in HENP - Harvey Newman

The resources are data generation (detector), tier 1 centers (1 TIPS (25 KSI95)), and tier 2, 3 & 4 centers. The real problem is to provide access to all the data for hundreds of users. Part of the challenge is to understand the scope of the problem and to be able to explain and justify it to others.

The Grid structure includes HEP applications at the top, application toolkits (remote data toolkit, visualization, collaboration, sensors, computation), and gridware (protocols, authentication, security, matching requests to resources, resource discovery, pre-fetch query estimation, forward prediction, prioritization, policy, caching, instrumentation, ...), all built on computers, networks, etc. This requires a collaboration of HENP people and computer scientists. The OO software/data has to be integrated with the Grid middleware and be invisible to the users. Politically, this needs inter-facility cooperation at a new level across world regions; agreement on the choice and implementation of standard grid components, services, security & authentication; interfacing the common services logically to match heterogeneous resources, performance levels and local generational requirements; and accounting and exchange of resources.

The solution could be widely applicable to data problems in other scientific fields and industry by the time of LHC startup (2005).

Condor and (the) Grid - Miron Livny

 http://www.cs.wisc.edu/~miron/, http://www.cs.wisc.edu/condor/

This is one of the Computer Science components of PPDG. One question is how to build communities into effective organizations. There will probably be many grids and we should plan that way from the start, e.g. how do we access multiple grids from a single machine which may itself also be a component of a/the grid? The main capabilities of Condor are: management of large collections of distributively owned, heterogeneous resources; management of large collections (10K) of jobs; remote execution; remote I/O; finding out what happened to a job; check-pointing (i.e. freeing computation when resources are not available and restarting later, e.g. if a user wants good interactivity on their own machine - part of allowing good co-existence); matchmaking; and system administration (upgrades, new release roll out).

Over 4000 machines are managed by Condor today: more than 1200 at UW (most often desktops, not machines locked in a computer center), more than 200 at INFN, and more than 800 in industry (e.g. Oracle does its daily build on a Condor cluster).

How does one get Condor to work on your problem, which is multiple (600) jobs that run in sequence would take a couple of months? The first step is to get organized: turn the workstation into a personal Condor and write a script that creates 600 input files, one for each of the calculations (see the sketch below). Condor will keep an eye on the jobs & keep you posted on their progress, and implement your policy on when jobs run. Step 2 is to extend to other parts of the Condor grid, i.e. take advantage of your community friends. First you need the permission of friends to use their pools of computers (referred to as flocking). Then you need access (accounts) plus certificates for Globus-managed Grid resources, and you submit 599 "to-Globus" glide-in Condor jobs to your personal Condor; when all jobs are done you remove any pending jobs, and you may be done in an afternoon. A "to-Globus" glide-in job will transform itself into a Globus job, submit itself to a Globus-managed Grid resource, and be monitored by your personal Condor; once the Globus job is allocated a resource it will use a GSIFTP server to fetch the Condor agents, start them, add the resource to your personal Condor, and vacate the resource before it is revoked by the scheduler.
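
A minimal Python sketch of the "get organized" step, assuming a hypothetical executable called sim that reads one input file per run. The submit-description keywords (universe, executable, arguments, queue N, $(Process)) are standard Condor, but the file names and parameters are invented for illustration.

    # Create 600 input files plus one Condor submit description covering them all.
    # The executable name "sim" and the input-file contents are illustrative assumptions.
    N_JOBS = 600

    for i in range(N_JOBS):
        with open(f"input.{i}", "w") as f:
            f.write(f"run_number = {i}\n")   # whatever parameters each calculation needs

    submit_lines = [
        "universe   = vanilla",
        "executable = sim",                  # hypothetical executable
        "arguments  = input.$(Process)",     # $(Process) takes values 0 .. N_JOBS-1
        "output     = out.$(Process)",
        "error      = err.$(Process)",
        "log        = sim.log",
        f"queue {N_JOBS}",
    ]
    with open("sim.submit", "w") as f:
        f.write("\n".join(submit_lines) + "\n")

    # Then run condor_submit sim.submit; condor_q and the log file keep you posted
    # on progress while your personal Condor enforces your policy on when jobs run.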

The obstacles to Grids are: ownership distribution (sociology: this varies from community to community; from a technology viewpoint one needs to provide tools for the local administrator to retain control, e.g. shutting down; sociology is probably the most difficult obstacle to address); customer awareness (education: need to set the expectation among scientists that excellent research can be done on KMart-type computers); size & uncertainties (robustness: dealing with thousands of computers, with no single reboot button to get things back into a clean state); technology evolution (portability: for LHC, for example, how does one move with the technology - porting over a period of 10-15 years where there are major changes; will WNT or Linux still be around, etc.); and physical distribution (technology).

Building a Production Grid - Bill Johnston

Looking at how to evaluate and integrate a collection of services into a high performance grid. Requirements include a lot of critical legacy code.

The UNICORE GRID project (EUROGrid) - Karl Solchenbach

A project funded by the German Ministry to develop a prototype for seamless, intuitive and secure access to computing resources. The users are the German research centers and universities with high performance computing centers. Implementation is by Pallas & Genias, with partners/founders fecit & ESMWWF, plus various computer vendors as affiliates. The centers have different configurations of hardware, procedures, etc., and it is hard to get things to run at multiple sites. They therefore wanted seamless access to the computing resources with an intuitive interface for batch submission, with the same look and feel independent of the target system and configuration. Abstract UNICORE specifications have to be mapped to specific functions, and UNICORE certificates to local ones. There are 3 tiers: a user interface for job preparation and management, site security, and a job control network supervisor.

R&D towards a data grid - Ian Foster

See http://www.gridforum.org/

This talk covers some of the details of the middleware services and the decisions made; in particular it reviews security, the info directory, resource discovery, etc. The environment is heterogeneous in terms of groups, resources, interfaces, distance, etc. The plan is to deploy standard grid services that encapsulate heterogeneity (simple, non-coercive and uniform), providing resource discovery and application configuration and optimization.

Security requires the definition of uniform authentication and authorization mechanisms that allow cooperating sites to accept credentials while retaining local control. The benefit is that only one A/A infrastructure needs to be maintained at each site; it enables intersite resource sharing and interoperability. This requires A/A standards and a certification authority and policies. They have single sign-on with global credentials (PKI), mapping to local credentials, support for delegation, no plaintext passwords, and retention of local control over policy. It is deployed across PACI & NASA sites. The GSS-API bindings are used by ssh, SecureCRT, gsiftp, Globus, Condor and others. The GAA (Generic Authorization & Access controls) interface provides hooks for policy.
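
To illustrate the "global credential mapped to local credential" idea, here is a toy Python lookup in the spirit of the Globus grid-mapfile; the subject names and local accounts are invented, and real deployments use the gridmap machinery rather than anything like this snippet.

    # Toy mapping from an authenticated global identity (certificate subject DN)
    # to a local account, in the spirit of a grid-mapfile.  Entries are invented.
    GRIDMAP = {
        "/O=Grid/O=SLAC/CN=Jane Physicist": "janep",   # hypothetical DN -> local user
        "/O=Grid/O=ANL/CN=John Doe":        "jdoe",
    }

    def local_user(subject_dn):
        """Return the local account for a global identity, or None if the
        site's own policy grants no access (local control is preserved)."""
        return GRIDMAP.get(subject_dn)

    print(local_user("/O=Grid/O=SLAC/CN=Jane Physicist"))   # -> janep

Authentication happens once against the global identity; each site keeps control simply by maintaining its own map.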

The Grid information services allow: effective resource use predicated on knowledge of the system components - published structure and state info, dynamic performance info, software info, etc.; and selection and scheduling of resources ("find me an X with a Y at time T"); gateways to other data sources are required. The infrastructure is based on common protocols (e.g. LDAP). Research questions include: unifying the metadata representation, and how to support a range of access models.
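
A hedged sketch of what a "find me an X with a Y" query against an LDAP-based information service (such as MDS) might look like in Python, using the python-ldap module; the endpoint, search base, object class and attribute names below are placeholders rather than the real MDS schema.

    # Query an LDAP-based grid information service for resources matching criteria.
    # Endpoint, base DN, filter and attribute names are illustrative placeholders.
    import ldap

    conn = ldap.initialize("ldap://mds.example.org:2135")   # placeholder info-service endpoint
    conn.simple_bind_s()                                     # anonymous bind

    # "Find me an X with a Y": e.g. compute resources reporting at least 16 free CPUs.
    results = conn.search_s(
        "o=Grid",                                            # placeholder search base
        ldap.SCOPE_SUBTREE,
        "(&(objectClass=ComputeResource)(freeCpus>=16))",    # illustrative filter
        ["hn", "freeCpus", "osName"],                        # illustrative attributes
    )

    for dn, attrs in results:
        print(dn, attrs)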

Grid service management issues include: locating & selecting resources; allocating resources; and advance reservations. The Globus Resource Allocation Manager (GRAM) provides a uniform interface to resource management and integration with security policy; co-allocation of services (coordinated allocation across multiple resources); and resource brokers, e.g. Condor.
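
For a flavour of what a "uniform interface to resource management" looks like, the Python snippet below builds an RSL-style request string of the kind GRAM accepts; the job parameters are illustrative, and a real request would be handed to a GRAM client (e.g. globusrun) against an actual gatekeeper.

    # Build an RSL-style resource request of the kind a GRAM service accepts.
    # The attribute values here are illustrative, not a real job.
    def rsl(**attrs):
        """Render keyword arguments as an RSL conjunction: &(k1=v1)(k2=v2)..."""
        return "&" + "".join(f"({k}={v})" for k, v in attrs.items())

    request = rsl(executable="/bin/hostname", count=4, maxTime=60)
    print(request)   # -> &(executable=/bin/hostname)(count=4)(maxTime=60)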

Grid services also provide access to data via high speed file transfer, plus tools for the management of replicas of large data sets.

The current technology focus areas: high end data intensive applications, interfaces to commodity technologies, distance visualization.

The Grid Forum is an IETF-like community to discuss & define Grid infrastructure (http://www.gridforum.org/). There have been two meetings: one on June 16-18 and a second in October. There is now a European Grid Forum. There is a Beta-Grid proposal to the NSF to plan & build a national infrastructure for computer systems research, of a scale that permits reasonable experimentation, and to encourage participation by adventurous application groups. The initial plan is for O(20) Linux clusters, a few hundred CPUs per cluster, and 2 TB per site.

Today there is a solid technology base for security, resource management and information services. Globus 1.1 is complete, with all core services robust & documented; there are substantial deployment activities and application experiments, and many tool projects are leveraging this considerable investment.

Outline of the European GRID project - Fabrizio Gagliardi / Les Robertson

This is a snapshot of today, but it is moving fast, with a lot of work following CHEP2000. The background: EU-NSF discussions on transatlantic collaboration on IT subjects; an EU-US workshop on large scientific databases & archives in the US last September; a meeting between the EU & HEPCCC last September; a project proposal initiated & led by HEPCCC; a kick-off meeting at CERN on Jan 11, 2000; encouragement by the EU to submit a proposal by May 10th.

The organization consists of 2 task forces (see http://nicewww.cern.ch/~les/grid/welcome.html): one task force coordinated by Les Robertson and the proposal task force coordinated by Fabrizio, with participation by institutional representatives. The mission has a technical and a political side. Participation: CERN, Hungary, FAE (Spain), IN2P3, INFN, RAL; interested: NIKHEF, DESY, GSI & other German institutions. There are links to MONARC, the European Research Network, LHC computing management and CERN computing management; potentially Sweden (medical) may join. The project outline was reviewed at CHEP2000; comments/corrections are due by tomorrow (Feb 13, 2000) midnight.

Industrial participation: have preliminary contacts with GRIDware, Siemens, Storage TEK. National institutions to propose industrial candidates by mid March. Equipment provider and middleware developer roles. 

HEP is pioneering this due to immediate needs. Connections with other sciences are being established: biology, life science, earth observation, and meteorology; they attended some of the workshops.

The project outline focuses on:  management of large amounts of data; high throughput computing; automated management of both local computing  fabrics and wide area GRID.  Other strong foci are middleware development and test bed demonstrations.

The work program includes R&D required on adaptability, scalability & wide-area distribution of resources. Tentative packages include: computing fabric management; mass store management; wide area data management; work-load management; wide area application monitoring; and application development adaptation.

The resources: the national/regional parts of the GRID are funded within the countries; high-performance bandwidth across sites is to be provided by other institutions (GEANT); EU financial support is for development of middleware, overall integration and operation of testbeds, and support for exchange of staff and dissemination of information (workshops, conferences, etc.).

The next actions are to get feedback on the outline; the project technical program is to be defined at the ITF workshop at CERN on March 7; the consortium and program of work need to be defined by March 25; and EU proposals are to be finalized by the end of April. Approval/negotiation will proceed for the rest of this year. A possible project start is early 2001, with a foreseen project duration till 2003-4, and a second phase 2004-6 ??

The main issue is to define a valid & credible technical program of work in a short time. US participation is an issue, as are HEP & other sciences' priorities, HEP & industry priorities and work schedules, relations to other similar initiatives in the EU and US, project management (many partners, including industry, education, labs, countries, many languages, etc.) and politics. They are thinking of 20 FTEs (at 70Eu/year/FTE). Given the short time frame for the proposal, US participation will come later.

The UK 'Grid' Report - Paul Jeffries

History: the Cashmore panel in 1Q99 recognized the need for substantial computing resources in 2005, expected to be based on a GRID model. In 2Q99 the UK government brought forward funding and HEP recognized the opportunity. There will be a 4 Mbps link to SLAC ($1.2M), a big SMP at RAL (E6500, $0.3M), and six Linux farms at Birmingham, Bristol, Manchester ...

Pre-GRID activities include a BaBar bid to the Joint Research Equipment Initiative fund: it got $800K ($650K funded kit worth $2M). A CDF request got 10 TB of disk and 4 SMP workstations; D0 is also putting in a bid.

An LHC bid requested a prototype Tier 1 center to support all 4 experiments (100 physicists in 4 working groups): 14400 PC-99 equivalents, 296 TB of disk, 2.0 PB of tape, with 10 staff at RAL and 10 at universities. Notification is expected in Autumn 2000.

They want scientists to define the need and computer scientists to assist in the implementation; this is called an e-science grid. There was a meeting in January which scientists from many disciplines attended. PP was well prepared and was recommended to go ahead, bringing in other disciplines later (next is bioscience). Paul Jeffries is to coordinate PP + others with the infrastructure communities (computer science and industry), with a meeting on 13 March 2000 which will include bio-science.

Want to define the actions: participate in testbed activities (EU grid), investigate QoS ...

A CLRC GRID team has been formed, within PP squeezed out of existing activities. There will be people hired; eventually they hope for 10+10 people. They need to commission the 4 Mbps link to Chicago.

The UK has had an active interest in networking for many years; UKERNA uses IEPM PingER. There are a number of interesting network developments and tests, and hopefully a 50 Mbps connection to CERN.

These are exciting times in the UK: there is real money, support in high places for GRID development, and particle physics is encouraged to blaze the trail.

INFN GRID Project - M. Mazzucato

Study & develop a general INFN computing infrastructure based on GRID techniques with Regional Center prototypes. The proposal was submitted Jan 13, 2000 with a 3 year duration; the next meeting with INFN management is Feb 18. They have Globus tests and Condor on the WAN as a general purpose computing resource. A GRID WG is to analyze a viable way to proceed. Globus is being evaluated at Bologna, CNAF, LNL, Padova & Roma1, installed on 5 Linux PCs at 3 sites. GSI works. There are problems with timeouts accessing MDS data, so they are looking at other tools. They will test/implement interfaces to Condor and LSF, and want a smart browser to implement a smart Global Resource Manager.

Condor is a large INFN project, with ~20 sites working with the Condor team at UWisc. One goal is Condor tuning on the WAN. A second goal is to treat the network as a Condor resource (e.g. distributed dynamic check-pointing that itself uses minimal network traffic). There is a single INFN Condor pool; they also have sub-pools. There is central and local management for Condor and a steering committee. Maintenance comes from UWisc. They want application monitoring & management, will need to instrument systems with timing information etc., and also want resource usage accounting.

