DIRC DATA QUALITY

Like all other BaBar subsystems, the DIRC has developed a powerful framework to monitor both its hardware status and the quality of the data it is recording. Its two main components are the online monitoring (EPICS, JAS plots and Fast Monitoring) and the data quality. These web pages focus on the latter, i.e. on checking the quality of the processed physics data. Nevertheless, a brief (and incomplete) overview of the online monitoring is given below.

  • Online monitoring
    The detector status is monitored 24/7 via EPICS, the slow monitoring system widely used in BaBar. Several EPICS panels are dedicated to the main DIRC components: High Voltages (HV), Front-End crates (F.E.), water plant, N2 flows, optical fiber connections. Another panel summarizes the information coming from the DIRC background monitoring variables: scaler rates, scaler rate ratios, current excesses in PMT channels, etc. An abnormal status changes the color of the corresponding variable (green = OK, yellow = minor alarm, red = major alarm, white = channel not connected) and can also trigger an alarm in the Alarm Handler displayed on one of the pilot's consoles in IR2 (a toy sketch of this color coding is given at the end of this bullet).
    Most alarms are minor and transient: they clear by themselves after a few seconds. More persistent and/or more severe alarms have to be addressed, either by the shifters when they are instructed to do so (guidance is usually available for a given variable) or by the relevant subsystem on-call experts, who get paged in case of serious problems, in particular hardware-related ones.
    In parallel with the data taking, several histograms of raw quantities (for instance, the number of PMT hits per sector, per HV channel or per front-end board) are recorded for a fraction of the events accepted by the L1 trigger. This Fast Monitoring has two main purposes: first, to feed the live JAS plots that the DQM shifter checks regularly to assess the quality of the data being taken; second, to provide expert monitoring plots for each subsystem (a sketch of this prescaled histogramming is also given below).
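
    As an illustration of the color coding above, here is a minimal Python sketch. All names and thresholds (AlarmState, classify, the example alarm bands) are invented for the example and do not correspond to the actual EPICS records used by BaBar.

        from enum import Enum

        class AlarmState(Enum):
            OK = "green"              # value within its nominal range
            MINOR = "yellow"          # minor alarm, usually transient
            MAJOR = "red"             # major alarm, must be addressed
            DISCONNECTED = "white"    # channel not connected

        def classify(value, minor_band, major_band, connected=True):
            """Return the alarm state of a monitored variable, given
            hypothetical (low, high) minor and major alarm bands."""
            if not connected:
                return AlarmState.DISCONNECTED
            if value < major_band[0] or value > major_band[1]:
                return AlarmState.MAJOR
            if value < minor_band[0] or value > minor_band[1]:
                return AlarmState.MINOR
            return AlarmState.OK

        # Example: a scaler rate of 1.8 (arbitrary units) with minor band
        # (0.5, 1.5) and major band (0.1, 2.5) raises a minor alarm.
        print(classify(1.8, (0.5, 1.5), (0.1, 2.5)).value)   # -> "yellow"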

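    The Fast Monitoring idea can be sketched in the same spirit; the event format, names and prescale factor below are assumptions made for the illustration, not the actual BaBar online code.

        from collections import Counter

        PRESCALE = 100                 # monitor 1 out of every 100 L1-accepted events
        hits_per_sector = Counter()    # occupancy histogram, one bin per DIRC sector

        def process_event(event_number, pmt_hit_sectors):
            """pmt_hit_sectors: list with one sector number (0-11) per PMT hit."""
            if event_number % PRESCALE != 0:
                return                          # outside the monitored fraction
            for sector in pmt_hit_sectors:
                hits_per_sector[sector] += 1    # fill the histogram

        # A dead or noisy sector then stands out as an outlier in
        # hits_per_sector, which is what the live JAS plots make
        # visible to the DQM shifter.
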
  • Offline monitoring ("Data Quality")
    The processing of the BaBar data follows a rather complex scheme, nicely summarized by the following drawing, taken from Jeff Kolb's talk at the Data Quality Group (DQG) parallel session of the December 2005 Collaboration meeting.

    [Figure: OPR schematic -- the BaBar data processing scheme]

    The main processing steps are the following.
    1. The data are taken in IR2; each run produces an .xtc file containing the raw data.
    2. During the data taking, a 'calib. xtc file' containing a subsample of the recorded events (Bhabhas, dimuons, etc.) is also produced. It is used as input for the Prompt Calibration (PC) pass, which is performed at SLAC a few hours after the data have been taken. For the DIRC, the PC step produces 13 T0 correction constants, 1 'global' and 12 'per sector', which are used to correct the PMT hit timing (see the first sketch below). Such quantities are generically called 'rolling calibrations' as they use the constants computed during the previous run's processing as input. For the calibrations to 'roll' well, runs must not be too long, to avoid drifts (the current limit in IR2 is 55 minutes), and they must be processed in the same order as they were taken.
    3. The new calibration constants are written to the Conditions databases. They are used for the full processing of the run -- the Event Reconstruction (ER) -- which is done in Padova on a large farm of machines. Data are automatically transferred from SLAC once the PC pass is completed.
    4. The Data Quality Group checks the two steps of the processing: PC and ER. After this step, a run can carry one of three flags: 'good', 'flawed' or 'bad'. In the first two cases, the run is included in the output collections and can be skimmed; data from a run with a bad status are not used for analysis. A flawed run is one which does not look completely OK for a given subsystem but whose problems are not severe enough to reject its data. If more than one subsystem declares a run flawed, its status may become bad after discussion in the Data Quality meeting (a toy version of this flag logic is sketched below).

    The 'live' OPR status is summarized on this web page.
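
Step 2 above describes the 13 T0 constants produced by the Prompt Calibration pass. The sketch below shows how such constants could be applied to PMT hit times and why the calibrations 'roll'; the function names and the sign convention are assumptions, not the actual BaBar reconstruction code.

    def corrected_time(t_raw, sector, t0_global, t0_sector):
        """Subtract the global and the per-sector T0 offsets from a raw
        PMT hit time; t0_sector holds the 12 per-sector constants."""
        return t_raw - t0_global - t0_sector[sector]

    def next_run_constants(previous_constants, calib_events):
        """'Rolling' calibration: the constants fitted on run N seed the
        fit for run N+1, hence runs must be processed in taking order."""
        t0_global, t0_sector = previous_constants
        # ... fit the timing residuals of the calibration sample here ...
        return t0_global, t0_sector

    # Hypothetical values, in ns:
    t0_sector = [0.12] * 12
    print(corrected_time(25.30, 3, 0.05, t0_sector))  # -> 25.13 (up to float rounding)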

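Similarly, the run-flagging logic of step 4 can be written as a toy function. In reality the demotion of a multiply-flawed run to 'bad' follows a discussion in the Data Quality meeting; here it is applied automatically for the sake of the example.

    def combine_flags(subsystem_flags):
        """subsystem_flags: dict mapping subsystem name -> 'good'|'flawed'|'bad'."""
        flags = subsystem_flags.values()
        if "bad" in flags:
            return "bad"                    # data not used for analysis
        n_flawed = sum(1 for f in flags if f == "flawed")
        if n_flawed > 1:
            return "bad"                    # candidate for demotion, see above
        return "flawed" if n_flawed == 1 else "good"

    print(combine_flags({"DIRC": "flawed", "EMC": "good", "DCH": "good"}))  # flawed
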

The table below provides the main links related to the DIRC Data Quality checks.

Data Quality Documentation: detailed description of the OPR QA plots
How to check run quality from scratch: a short FAQ


This page is maintained by Nicolas Arnaud
Last significant update: October 06 2005