DIRC DATA QUALITY DOCUMENTATION
Each subsystem (subdetectors, trigger, PID, Breco and physics) has one expert
in the Data Quality Group (DQG). Its main goal is to
assess the quality of the data taken in IR2, either processed or
reprocessed(*) in OPR (Online PRocessing).
(*)A set of runs can be reprocessed if the conditions for a given period
have improved -- for instance, the first days of running after a shutdown
are processed again once the SVT local alignment and several constants
(Bhabha and dE/dx) have been computed from the new data -- or if bugs are
found in the release used for the initial processing.
Reprocessed runs must be checked as carefully as newly processed runs, as
the new processing can corrupt data that were initially good (bad release,
problems in some database, etc.).
The DQG meets weekly, currently every Thursday at 10.30 am (SLAC time). The
meeting is announced in the Data Quality Hypernews, to which all DIRC DQG
experts must subscribe. This hypernews is used to inform the experts of the
successive tasks assigned to the DQG, to report the results of the checks
and to exchange information when problems are detected.
Runs are checked on a weekly basis. At least two days before the meeting, the
Run Quality Manager (RQM) posts to the hypernews the list of runs whose
quality needs to be checked. These lists are archived in the Stripcharts
web page.
How does the data quality check work?
Among the various outputs of the OPR processing (PC or ER) of a run,
there is a ROOT file (it used to be an hbook file until the 18 series)
which contains the monitoring histograms of all subsystems. This ROOT
file is automatically processed by a set of ROOT macros which produce a
postscript file used by DQG experts to assess the quality of the run.
The DIRC plots cover about a dozen pages. DIRC-specific histograms
are defined, filled and processed in the
Obviously, one has no time to check the whole set of plots for each run:
there can be several hundred runs in the weekly list!
Therefore, an automatic monitoring has been set up to dramatically reduce
the number of runs whose quality requires a detailed check. The DIRC part
of this code was originally written by Malcolm John
in a standalone DIRC package. In early 2004,
it was moved inside the official
package by Nicolas Arnaud with the help of
Xavier Giroux. Several functionalities were added.
Each subsystem defines a list of quantities important for its part
of the monitoring. For each processed run, they are computed from the
OPR ROOT file and their run-by-run evolution is monitored via stripcharts.
For each significant stripchart, a nominal range is defined in the code. If
a stripchart goes outside its nominal range, an automated message is
generated. All warning messages associated with the same run are collected
and included in an e-mail which is sent to the relevant expert once
the whole list of runs has gone through the automated checks.
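The range-check logic can be sketched as follows. This is only an
illustration of the idea, not the actual BaBar monitoring code; the
quantity names and nominal ranges below are invented.

```python
# Sketch of the stripchart range check: each monitored quantity has a
# nominal range, and every out-of-range value generates a warning.
# Names and ranges are invented for illustration only.

NOMINAL_RANGES = {
    "dirc_mean_photons_per_track": (20.0, 40.0),
    "dirc_hit_occupancy": (0.01, 0.10),
}

def check_run(run_number, values):
    """Return the list of warning messages for one run."""
    warnings = []
    for name, value in values.items():
        lo, hi = NOMINAL_RANGES[name]
        if not (lo <= value <= hi):
            warnings.append(
                f"Run {run_number}: {name} = {value} "
                f"outside nominal range [{lo}, {hi}]"
            )
    return warnings

def check_runs(runs):
    """Gather warnings for all runs, grouped per run, as would be
    collected into the e-mail sent to the expert."""
    report = {}
    for run_number, values in runs.items():
        warnings = check_run(run_number, values)
        if warnings:
            report[run_number] = warnings
    return report
```

A run with all quantities inside their ranges produces no warnings and
never reaches the expert's mailbox; any excursion is reported with the
run number, the quantity and the violated range.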
Runs which do not fire this monitoring, done via the
package, are almost certainly good. Those which generate warnings may need
to be checked manually by looking at the detailed plots in the postscript
files. As not all stripcharts are equally important, it is up to the
expert's judgement (based on his/her experience) to decide whether a given
run has to be checked or not.
I (Nicolas) am not aware of any run flagged bad by the
DIRC which wasn't obviously bad at first look. Of course, this statement
does not mean that DIRC QA checks do not require full seriousness and care
from the expert. It rather aims at preventing a new expert from getting
worried if he/she receives quite a lot of warnings the first week. Most (if
not all) of these runs are certainly good for physics: just check them to
verify that this is indeed the case.
Generally speaking, it is better to keep the nominal ranges of the stripchart
variables tight. This may lead to several false alarms but should avoid
ANY false dismissal.
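The trade-off can be illustrated with a toy example (all numbers are
invented; assume a quantity that nominally sits near 30):

```python
# Toy illustration of tight versus loose nominal ranges.
# "good" sits at the nominal value, "borderline" has drifted slightly,
# "bad" is far off. All values are invented.
runs = {"good": 30.0, "borderline": 26.0, "bad": 10.0}

def flagged(value, lo, hi):
    """True if the value falls outside the nominal range [lo, hi]."""
    return not (lo <= value <= hi)

# Loose range: the drifting run goes unnoticed.
loose = {name: flagged(v, 15.0, 45.0) for name, v in runs.items()}

# Tight range: the drifting run raises a false alarm (costing the
# expert a quick look), but a genuinely bad run can never slip through.
tight = {name: flagged(v, 28.0, 32.0) for name, v in runs.items()}
```

With the loose range only the obviously bad run fires; with the tight
range the borderline run fires too, which is exactly the intended price
paid to avoid false dismissals.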
Once you have reviewed the quality of all the runs, you must summarize your
checks by replying to the RQM's hypernews post. Even if you have nothing
special to report, it is better to attend (or to phone in to) the DQG
meeting to listen to the data taking and processing status and to exchange
information with the other subsystems. For instance, a problem with the
DIRC is likely to degrade the performance of the PID selectors.
Monte Carlo release validations
In addition to the run quality monitoring, the DQG is
required to validate the releases which are used to produce Monte Carlo
data -- if you haven't realized it yet, you'll soon understand that BaBar
is also a generator of software releases...
For this purpose, MC samples of several decays (from a few kEvents to a few
tens of kEvents per mode) are generated and processed with the new release.
OPR postscript files are produced and checked to assess the quality of the
code.
Contrary to the data monitoring, no automated check can be set up for this
job: the DIRC histograms do depend on the decay mode! Therefore, the only
way to validate a release is to compare its QA plots one by one with those
produced with the last validated release for the same SP mode.
As this job is quite time-consuming and can interfere with the weekly
checks of run quality, the Validation Board was set up at the
end of 2005 to make sure that the DQG would not get overloaded by the
various tasks running in parallel. For more information on it, see the
physics and DQ talks of the December 2005 Collaboration Meeting.
This page is maintained by Nicolas Arnaud
Last significant update: December 15 2005