Trigger Configuration with DbProxy


Introduction

The ATLAS High Level Trigger (HLT) farm will eventually consist of 500 computing nodes for the Level-2 trigger (L2) and 1600 Event Filter (EF) nodes, which process the 100 kHz of events accepted by the Level-1 hardware trigger and reduce them to <200 Hz of interesting physics events for logging. Each computing node has 8 CPU cores, so the whole system involves >16,000 HLT processes running simultaneously. One of the major technical difficulties in operating such a system is its configuration: thousands of processes simultaneously demand ~30 MBytes of configuration data from the online database at the start of a run. Without controlled access, the resulting traffic jam at the database could take ~45 min to clear, which would severely impact the ATLAS data-taking efficiency. SLAC is responsible for delivering the DbProxy system, a parallel distribution tree of proxies holding cached configuration data. A prototype DbProxy using the MySQL protocol has been operational in ATLAS for some time, and valuable operating experience has been gathered. For the final system, a version that works with Oracle is necessary. The joint development between SLAC and CERN IT of a new system that speaks the technology-independent CORAL protocol is now approaching a first test of the full chain. An intensive commissioning effort for the new DbProxy system is expected this summer to get ready for the first beam.
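The load-reduction idea behind the proxy tree can be illustrated with a minimal sketch: each proxy answers repeated requests from its local cache and forwards each distinct request up the tree only once, so the database sees one query instead of thousands. All class and method names below are illustrative assumptions, not the actual DbProxy or CORAL API.

```python
class ConfigSource:
    """Stands in for the online database backend; counts real fetches."""
    def __init__(self):
        self.fetches = 0

    def query(self, key):
        self.fetches += 1
        return f"payload-for-{key}"


class DbProxyNode:
    """One node in the proxy tree; 'parent' is another node or the source."""
    def __init__(self, parent):
        self.parent = parent
        self.cache = {}

    def query(self, key):
        if key not in self.cache:          # cache miss: forward up the tree once
            self.cache[key] = self.parent.query(key)
        return self.cache[key]             # cache hit: served locally


if __name__ == "__main__":
    source = ConfigSource()
    top = DbProxyNode(source)
    leaves = [DbProxyNode(top) for _ in range(4)]   # small two-level tree
    # 4 "rack-level" proxies each serve 100 "HLT processes" requesting
    # the same configuration key at start of run
    for leaf in leaves:
        for _ in range(100):
            leaf.query("trigger-menu")
    print(source.fetches)  # prints 1: the backend saw one query, not 400
```

In the real system the fan-out is of course much larger and the payloads far bigger, but the scaling argument is the same: database load grows with the number of distinct queries, not with the number of HLT processes.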

More detailed information on the project status can be found on the SLAC ATLAS TDAQ web page, under the section HLT Project: HLT configuration scalability and online database interaction. There are various talks on the motivation, original development, test experience, and current status.

Project Tasks

For the intensive commissioning of the new version of DbProxy, the expected activities for the student include (but are not limited to):

Su Dong & Rainer Bartoldus


Last modified:  Sun May 4 23:52:06 PDT 2008