HLT Release Control and Integration



1) Release build infrastructure


This is currently being migrated by Jiri to the offline-supported system, which uses the Tag Collector (atlastagcollector.in2p3.fr) and NICOS (http://cern.ch/atlas-computing/links/distDirectory/nightlies/global/, http://www.usatlas.bnl.gov/computing/software/nicos) for nightly builds and tests.

Note that the Trigger project already shown on this page is the trigger software in the offline release, not the new HLT project. Offline releases include the Trigger project, which contains the algorithms for reconstruction and event selection in the HLT.

Proposal: Discuss with Jiri to find out how this is being set up, and to agree where initial work can best be focused.


2) Use of offline testing frameworks

ATN - AT Night

For short tests run as part of the nightly build system; here "short" means less than about 8 minutes.
http://cern.ch/Atlas/GROUPS/SOFTWARE/OO/dist/nightlies/nicoswww/atnight.html
For examples, go to the NICOS Trigger project 1.3.0, opt:
http://cern.ch/atlas-computing/links/distDirectory/nightlies/projects/nicos_web_areaTrgOpt/
- Click on a release name such as rel_1.
- Near the top you will see "Integration+Unit tests results [click for details]"; click there to get a pop-up window of test status, e.g.
http://cern.ch/atlas-computing/links/distDirectory/nightlies/projects/nicos_web_areaTrgOpt/nicos_testsummary_1.html
- Click on tests in the trigger group to see their output.
- The scripts that run these tests are in:
http://atlas-sw.cern.ch/cgi-bin/cvsweb.cgi/offline/Trigger/TriggerRelease/test/?cvsroot=atlas
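
To run one of these tests by hand, something like the following should work (a sketch only; the script and log file names below are placeholders, take the real ones from the CVS directory above):

# Run an ATN test script directly in an already set up nightly
# release and check the outcome; names here are placeholders.
cd Trigger/TriggerRelease/test
./some_atn_test.sh 2>&1 | tee atn_test.log    # hypothetical script name
grep -i "error\|failure" atn_test.log         # quick scan of the log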
 

RTT - RunTimeTester

For longer tests, run on a batch farm at UCL the following day:
http://www.hep.ucl.ac.uk/atlas/AtlasTesting/
Our tests are defined here:
http://atlas-sw.cern.ch/cgi-bin/cvsweb.cgi/offline/Trigger/TriggerRelease/test/TriggerRelease_TestConfiguration.xml?rev=HEAD&content-type=text/x-cvsweb-markup&cvsroot=atlas
They are currently not working, pending some changes in the RTT code, but they can be run by hand.
The memory leak check is probably the most interesting one: it plots the virtual memory usage against the number of events processed and calculates the gradient (a sketch of the calculation follows below). It does not show where a leak is, but it does show how big it is. We are only using these frameworks for tests of the trigger code in offline releases, not HLT releases. The next step is to set up tests from the HLT releases in these frameworks too. For more information, discuss with Simon.
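
As an illustration of the gradient calculation, here is a minimal sketch, assuming the test has written two columns (event number, virtual memory in kB) to a text file; the file name and format are illustrative, not what RTT actually produces:

# Least-squares slope of virtual memory vs. event number; a
# positive slope estimates the leak size per event.
awk '{ n++; sx+=$1; sy+=$2; sxx+=$1*$1; sxy+=$1*$2 }
     END { print "leak:", (n*sxy - sx*sy)/(n*sxx - sx*sx), "kB/event" }' vmem.txt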


3) Running offline tests and AthenaMT


For AthenaMT, see recent tutorials, e.g. Calo trigger software: http://agenda.cern.ch/fullAgenda.php?ida=a058178#2006-01-30
See "Running trigger code quasi online" by Xin Wu. Also see Werner's description of the new AthenaMT:
http://agenda.cern.ch/askArchive.php?base=agenda&categ=a058178&id=a058178s9t11/transparencies
For offline trigger tests, see:  https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerInOfflineReleases#Nightly_tests
These are the tests run in ATN (see above).

The general instructions to set up and run an example are also shown on this page, along with the status of recent releases. The tests and configuration closest to what is run online are the ones named "*ReadBS", which you find here:
https://twiki.cern.ch/twiki/bin/view/Atlas/TriggerInOfflineReleases#JobOptions
e.g. athena.py -s -c onlyCalo=True TrigReadBS_Flags.py
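
A minimal by-hand sequence might look like this (a sketch: the CMT setup command and the release tag are assumptions, adapt them to your own environment):

# Set up an offline release with CMT, then run the calo-only
# bytestream test; -s makes athena print the job options as they
# are included.
source ~/cmthome/setup.sh -tag=11.0.2,opt    # hypothetical setup command
athena.py -s -c onlyCalo=True TrigReadBS_Flags.py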

For more information, discuss with Xin, John and Werner.

Proposal: set up a suite of automatic tests for detector and physics slices, like John has done in the offline release. These would run in the HLT release (see below) following the successful completion of the corresponding offline tests. To simplify maintenance, try to derive the configuration automatically from the ones used in the offline tests, perhaps by the addition of extra flags.
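
For example (a sketch of the idea only, not an existing configuration; the isOnline flag and the athenaMT options shown are assumptions):

# One set of job options steered by flags: the offline nightly test
# and the HLT-release test share everything but the extra flag.
athena.py -c "onlyCalo=True" TrigReadBS_Flags.py                             # offline (ATN) test
athenaMT -c "onlyCalo=True;isOnline=True" -f raw.data TrigReadBS_Flags.py   # same configuration in the HLT release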


4) Subsequent steps

Having set up nightly builds in the new Tag Collector + NICOS system, implement the existing basic integration tests in this system (e.g. HelloWorld).
Possible next steps:
a) Implement new tests based on new AthenaMT and HLT features, such as exercising state transitions and possibly the monitoring functionality (see the sketch after this list).
b) Implement more complete tests of the selection software and offline/HLT integration, as suggested in section 3.
c) Testbed validation suite: some of the above tests should form part of a validation suite used, for example, to validate a new installation on a test bed before performance measurements are made.
d) Having set up any tests, provide feedback on software and configuration problems and learn how to help fix them. Also help publicise the tests: they must become widely known and routinely checked by developers.
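
As a sketch of what the state-transition test in a) might look like, assuming athenaMT can be driven through its state machine from the command line; the option and the commands below are illustrative assumptions, not the actual athenaMT interface:

# Hypothetical state-transition exercise: configure once, then
# cycle start/stop twice to catch state that survives between runs.
athenaMT -i TrigReadBS_Flags.py <<'EOF'
configure
start
stop
start
stop
unconfigure
EOF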

Documentation

Documentation about the Level-2 processing unit and the Event filter processing task can be found under this link:
http://atlas.web.cern.ch/Atlas/GROUPS/DAQTRIG/HLT/hlt-infrastructure.html
On the side panel under "Documentation", links to various documents can be found which describe the data flow applications, the development model (see also the link "athenaMT") and the HLT interfaces. Further documentation is available under "Useful links" and under "Reviews --> 2005 HLT Reviews"
( https://twiki.cern.ch/twiki/bin/view/Atlas/HLTReviewsPage  )

The documentation describes the software up to offline release 10.0.6, HLT release 02-01-01 and tdaq release 01-02-00. From offline release 11.x.x on, changes have been made to the state transitions used and to the HLT/Athena interface packages. The updated documentation is not yet ready, as development work on the HLT release is still ongoing. Preliminary documentation for the "new athenaMT" can be found on AFS at /afs/cern.ch/user/w/wiedenat/public/athenaMT/documentation/athenaMT-SW-Integration-v2.0.pdf
and in this talk
http://agenda.cern.ch/askArchive.php?base=agenda&categ=a058178&id=a058178s9t11/transparencies

Software Releases

See the link "HLT Releases" on the side panel of the page above. In the preseries and for large-scale tests, mostly offline release 10.0.6 with the corresponding tdaq and HLT releases has been used; for this release an installation image is also available (see below). For offline releases 11.x.x, development work on the HLT release is still ongoing. Preliminary HLT releases for offline releases 11.0.1 and 11.0.2 are available as HLT-03-00-0(0,1,2).

Installation Image

A complete setup of all the needed software, bundled in one installation image, to run either HLT algorithms in multi-node partitions or tests with athenaMT/PT, can be found here: https://twiki.cern.ch/twiki/bin/view/Atlas/HltImage

The latest image was made with offline release 10.0.6 and should be mounted under the directory "/sw". Versions with newer offline releases are not yet available; however, the installation scripts available on the image in the directory "/sw/setup/trunk" should also allow newer releases to be installed. For further information please contact Jiri Masik.

In the directory "/sw/hlt/examples" the image contains example partition files for multi-node systems of different sizes. Please consult the README file in the same directory.
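
Putting the above together, a minimal sketch (the image file name and the mount options are assumptions; adapt them to how the image is actually distributed):

# Mount the installation image read-only under /sw, then look at the
# installation scripts and the example partition files.
sudo mount -o loop,ro hlt-image-10.0.6.img /sw   # hypothetical image file name
ls /sw/setup/trunk             # installation scripts, also for newer releases
less /sw/hlt/examples/README   # describes the example multi-node partitions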

Python Partition Test Scripts

Contact person: Andre dos Anjos
Documentation:
http://pcuw32/svn/image/trunk/implant/hlt/examples/scripts/README
If you want to check out a fresh copy, do:
svn co http://pcuw32/svn/image/trunk/implant/hlt/examples/scripts <dir>

Release Status

An additional link from Jiri showing the current test suites:
http://atlas.web.cern.ch/Atlas/project/hlt/admin/www/HLT-02-01-01/testsuite-i686-slc3-gcc323-dbg.log.html
 



Su Dong
Last modified: Tue Feb 15 12:00:00 PDT 2006