Trigger Statistics Survey

Rainer Bartoldus, SLAC

This page provides instructions on how to collect Trigger statistics for a BaBar analysis. The primary goal of these instructions is to support the Trigger statistics/efficiency survey that is currently being conducted to explore the possibilities for a tighter L1 Trigger for future high-luminosity data taking in Run 6 and beyond.

Please send questions/comments to the Trigger Software HyperNews or contact Rainer Bartoldus.

Quick links: [Introduction] [Building your application] [Running the tool] [Understanding the output]


Introduction

PEP-II luminosity and BaBar trigger rate

As PEP-II is approaching higher luminosities, the BaBar Level 1 trigger rate is getting closer to the DAQ bandwidth limit of about 5 kHz. Prompted by this, the Trigger group has started a project to study possible new Level 1 configurations that can keep the L1 rate below the limit, while preserving most, if not all, relevant physics.

Trigger statistics survey

The first step in this is to conduct a survey of as broad a range of analyses as possible in order to get a comprehensive picture of which of our triggers are the most critical, which may be less so, and to learn about any inefficiencies or vulnerabilities that may already exist. The aim of this first survey is to identify a list of analyses that are the most sensitive to the trigger configuration. The plan is then to use those as benchmark analyses for any refinements in a second phase.

Trigger statistics tool

To this end, we have produced a tool that can be run at the Beta-level and that will collect statistics on L1 trigger lines, their efficiencies and correlations. In the following, I will give some guidance as to how to run this tool on a concrete analysis data sample.

An initial technical discussion of the tool and the histograms it produces can be found in the Trigger Software HyperNews. The kick-off of the project was announced in a plenary talk at the Montreal Collaboration meeting. A review of the plan and a look at some first results was given at the following September Collaboration meeting.

What statistics to collect

We all know (or take for granted) that L1 trigger efficiencies on generic event samples, say generic BB̅ events, are quite high and stable. What we are mostly interested in are the statistics for concrete signal samples, for instance, B → τ ν, where one relies heavily on the other B to trigger the event. Efficiencies are more critical in many τ+τ- and two-photon analyses and can vary greatly from channel to channel. What is most relevant is to run and collect statistics on the actual signal events used (or intended to be used) in an analysis. After all, we are striving to preserve signal efficiency, not background, and we need to focus on the part of the signal that actually makes it through the analysis, not the part that is going to be rejected anyway. For the most part we will be focusing on signal Monte Carlo.

Building your application

This section describes how to build or find an application that collects statistics on your analysis.

Tags and base release

If you are using analysis-31, you will have to check out several extra tags to be able to use the new module. I recommend that you switch to the newer release, 18.7.1 (analysis-32), which includes everything you need. However, if there is some reason why you cannot, follow the instructions labeled analysis-31 below. Otherwise read the analysis-32 parts.

analysis-31 (= 18.6.4)

Based on the analysis-31 (aka 18.6.4) release, the following list of tags is needed to build the statistics sequence:
  Treating current directory as a test release based on 18.6.4
  TrgConfig    V01-08-00  (Release uses V01-07-00)
  TrgParser    V00-24-00  (Release uses V00-20-00)
  TrgTools     V00-10-02  (Release uses V00-05-00)
Depending on what application you are going to build with this, you might have to check out and rebuild other packages that depend on these. You can run the checkdep command to find all possible dependencies. Note that TrgConfig is a low-level package and therefore checkdep will report many dependencies related to online and simulation that are probably not relevant for your analysis application. For example, if your application were BetaMiniUserApp, no extra packages would be needed. But you should be careful not to produce an inconsistent build. If unsure, add everything that checkdep reports, or contact me.
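
For illustration only, a minimal sketch of what this might look like in your test release, assuming the usual SoftRelTools commands are available at your site (addpkg to check out a tagged package; checkdep as mentioned above):
  $ addpkg TrgConfig V01-08-00
  $ addpkg TrgParser V00-24-00
  $ addpkg TrgTools  V00-10-02
  $ checkdep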

analysis-32 (= 18.7.1)

The 18.7.1 release, which was announced as analysis-32, already contains all the necessary code.

Using a predefined application out of the box

There is a predefined application called TrgUserMiniApp in the TrgUser package that can be built and run out of the box without changes. It represents a minimal Beta application with the TrgStatistics sequence already included.

analysis-31

Again based on analysis-31, this can be done by adding the following two packages on top of the above list.
  Treating current directory as a test release based on 18.6.4
  PackageList  V00-13-23  (Release uses V00-11-01)
  TrgUser      V00-00-00  (not in Release)
With these, run
$ gmake TrgUser.bin
and this will create the TrgUserMiniApp executable.

analysis-32

The 18.7.1 (analysis-32) release already includes the TrgUserMiniApp executable. There is no need to rebuild anything.

Adding the sequence to an existing analysis job

In order to add the trigger statistics sequence to your own application, simply add to the AppUserBuild.cc:
#include <TrgTools/TrgStatisticsSequence.hh>
...
TrgStatisticsSequence( this );
Then add these lines to your tcl:
sourceFoundFile TrgTools/TrgStatisticsSequence.tcl
path append MyPath TrgStatisticsSequence
You want to put the sequence on a path that sees (only) the selected events. Replace MyPath here with whatever is appropriate.
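
As a sketch only, with MySelectionPath and MySelectionFilter as hypothetical placeholders for your own path and selection module, the idea is that the statistics sequence comes after the selection, so that only selected events reach it:
sourceFoundFile TrgTools/TrgStatisticsSequence.tcl
# MySelectionPath and MySelectionFilter are placeholders for your own selection;
# only events passing the filter reach the statistics sequence
path create MySelectionPath
path append MySelectionPath MySelectionFilter
path append MySelectionPath TrgStatisticsSequence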

Running the tool

At this point you should have either your own analysis application with the statistics sequence built in, or a version of TrgUserMiniApp that is ready to be run over a collection. In the following, I will give examples for running TrgUserMiniApp. Your own application should be completely analogous. In any case, the tool will create a ROOT file with a number of histograms that contain all the statistics.

Running over a collection

I assume you have identified a suitable Monte Carlo signal collection. Go to the workdir of your test release and create a little tcl file like the one below. Suppose we want to run over two-photon events going into π+π-, using this signal mode:
workdir> BbkSPModes --modenum 5936
: Mode : Decfile                         : Generator : Filter : Run Type                               :
: 5936 : gamgam-pipi.flat.2.1-4.5gev.tcl : continuum :        : 2-photon -> pi+ pi-; 2.1 < W < 4.5 GeV :
Retrieve the corresponding tcl file as usual:
workdir> BbkDatasetTcl SP-5936
BbkDatasetTcl: wrote SP-5936.tcl
Selected 3 collections, 11000/11000 events, ~0.0/pb, from bbkr18 at slac
Then create a tcl snippet like the following:
set NEvents 10000 ;# this is used by TrgUserMiniApp.tcl
set MCTruth true
sourceFoundFile SP-5936.tcl

sourceFoundFile TrgUser/TrgUserMiniApp.tcl
Save this in, say, myTrgUserMiniApp.tcl. Then run the job as usual:
workdir> TrgUserMiniApp myTrgUserMiniApp.tcl >& myTrgUserMiniApp.log &
You probably want to run this in a batch queue.
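
For instance, with the LSF batch system at SLAC this could look like the following (the queue name is only a placeholder; pick one appropriate for the length of your job):
workdir> bsub -q long -o myTrgUserMiniApp.log TrgUserMiniApp myTrgUserMiniApp.tcl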

Upon completion, this will leave a ROOT file in your workdir called TriggerStatistics.root.

Renaming the statistics output file

If you want to rename your statistics output file, you can do this via tcl. For example:
module talk TrgStatisticsModule
  outputFile set "myTrgStatistics-[exec date +%Y%m%d-%H%M].root"
exit
You may find it useful to append an identifier for your analysis, decay channel and/or mode number to the file name. This will also help us to distinguish the output from various analyses.

Note: this currently only works if you check out the tag
  TrgTools     V00-10-02
(which is not in analysis-32) and rebuild your application (or TrgUserMiniApp) as described above.

Using a timestamp file to select the events

If your signal efficiency is low and you cannot, for some reason, perform all or most of the selection at the Beta level, the statistics you would get from running over the entire signal collection would be dominated by events that you do not in fact use. This would make the outcome of the statistics study less representative.

There is a way out. If you perform the main part of your event selection at the ROOT/ntuple level, what you need to do is identify your signal events and write out their event timestamps. Timestamps are often referred to as the upper and lower event id. If you don't know what these are, ask someone in your group.

You will then be able to feed these timestamps back to a filter module on the statistics sequence that runs before the statistics module. You will still run over the entire signal collection that way, but trigger statistics will only be accumulated for events that you pre-selected in your ROOT/ntuple analysis. In this case you don't need your own analysis application; TrgUserMiniApp can do the timestamp filtering. All you need is to save the timestamps to a flat file.
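
As an illustration only: the exact flat-file format expected by the filter module is not specified here, and the tree and branch names below are hypothetical placeholders for whatever your ntuple actually contains. Writing out one upper/lower event id pair per line could look something like this ROOT macro:
// WriteTimestamps.C -- sketch only; adapt the tree and branch names to your ntuple.
#include <fstream>
#include "TFile.h"
#include "TTree.h"

void WriteTimestamps(const char* ntupleFile = "myNtuple.root",
                     const char* outFile    = "myTimestamps.txt")
{
  TFile f(ntupleFile);
  TTree* t = (TTree*)f.Get("ntp1");              // hypothetical tree name
  UInt_t upperID = 0, lowerID = 0;
  t->SetBranchAddress("upperID", &upperID);      // hypothetical branch names
  t->SetBranchAddress("lowerID", &lowerID);

  std::ofstream out(outFile);
  for (Long64_t i = 0; i < t->GetEntries(); ++i) {
    t->GetEntry(i);
    out << upperID << " " << lowerID << "\n";    // one event id pair per line
  }
}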

If you think this is the only way for you to participate in the survey, please get in touch with me, and I will post further instructions.


Understanding the output

At this point, we are asking that you send us (a pointer to) the output ROOT file(s) that you obtained, together with some description of which events you ran over and whatever further selection you did for your analysis. For those of you who would like to understand what they are seeing, a quick description follows.

Creating plots

There is a ROOT macro that comes with the TrgTools package which, in its default mode, creates a PostScript file with four pages per L1 trigger configuration. You can create this PostScript file like this:

workdir> bbrroot -b -q `searchFile TrgTools/TriggerStatistics.C`'("TriggerStatistics.root","ps")'
Note that this command didn't work for some people. In that case, start root, then type:
root> .L {proper_path}/TriggerStatistics.C
root> TriggerStatistics("TriggerStatistics.root","ps")
Substitute your own file name for TriggerStatistics.root if you changed it. (Watch out for the proper quotes: the first pair of backquotes is needed to locate the macro in your test or underlying release directory, and the second set of quotes protects the parentheses and double quotes around the file name string.)

As mentioned, if your collection spans several different L1 configurations, you get one set of plots for each one. Let's just look at the first set. But first, let us briefly recap the trigger lines and trigger line categories that are plotted.

Trigger lines and categories

GLT lines

The first 24 lines, starting with something like 3B&2A&1Z&2M, are the individual trigger lines from the Global Trigger (GLT). They represent logical combinations of GLT object counts, such as 3B, meaning "at least three short tracks". A brief description of the GLT objects can be found here.

Auxiliary FCT lines

The next (up to eight) lines are auxiliary triggers that either come in through one of the four external inputs to the Fast Control Gate Module (FCGM) or are generated in the Fast Control Partition Master (FCPM). One of the latter group is the random 1 Hz trigger called cyclic1.

Line groups or categories

The remaining line entries are groups (logical combinations) derived from the above individual lines. These are some of the most instructive things to look at. We will go through them one by one.

unprescaled / prescaled
These two categories count events on which at least one unprescaled (prescaled) trigger line fired. On the events that triggered, the two categories are complementary, i.e., each triggered event falls into either the one or the other category.

pureDCT / pureEMT / pureIFT / mixed
The pureDCT (pureEMT, pureIFT) category classifies events on which at least one trigger line fired that consists solely of GLT objects from one detector trigger, the DCT (EMT, IFT). mixed is everything that is not pure, which means that the four categories are complementary in the above sense. For completeness, pureDCT is the logical OR of (pureBLT || purePTD || pureZPD).

Example: The lines 3B&2A&2Z, B*&1A&1Z and 1B are all pure DCT lines. Likewise, 2E, EM*> and 3M&M* are pure EMT, whereas the lines 1Y&1B, M*&5U and 2Zt&1A&1M are mixed.

1-particle / 2-particle / 3+-particle
These categories fire if at least one one-particle (two-particle, three-or-more-particle) trigger line fired on the event. An n-particle trigger line is one that requires at least n particles in the event, but can be satisfied by no more than n particles.

For example, 2Zt (two z-tracks with tight vertex cuts), B* (two short tracks back to back), 2B&1A (two short tracks, one long) are all 2-particle lines, whereas 1B (one short track), 1Y&1B (one backward calorimeter tower and one short track) are 1-particle lines. These three categories are also complementary for all triggered events.

all
This flag is set unconditionally on every event. When it is the only flag that is set (exclusive), it means that no trigger line fired. (This can only happen in the Monte Carlo.)

Understanding the plots

The four plots you see all show efficiencies of individual trigger lines, or of combinations of trigger lines, normalized to 1000. When an entry reads 999, it means that the given line, or combination of lines, fired on average 999 times for every 1000 events.

Trigger Line Rates

The first plot shows overall trigger line rates (in yellow) with their exclusive contributions overlaid (in red). The overall line rates quantify how many times (per 1000 events) a given line or category fired, regardless of the other lines in the event. The exclusive contributions describe how many times an event was triggered on just the given trigger line, with all other lines off. This directly tells us how many events we would lose if we turned off the specific line or category.
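
To make the distinction concrete with purely illustrative numbers: if a line shows an overall rate of 950 but an exclusive rate of 5, that line fired on 95% of the events, yet only 5 in 1000 events were triggered by that line alone; turning off just that line would therefore cost only those 5 events per 1000, since the rest are also caught by other lines.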

Trigger Line Rates histogram

Trigger Line Exclusive Rates

This plot shows the exclusive rates from the overlay in the previous one, on their own scale.

Trigger Line Exclusive Rates histograms

Trigger Line Correlations

This plot shows how many times trigger lines fired together, i.e., how many times a given trigger line fired when another trigger line also fired.

Trigger Line Correlations histogram

Trigger Line Anti-Correlations

The last plot shows anti-correlations of trigger lines, i.e., how many times a given trigger line fired when another one did not fire.

Trigger Line Anti-Correlations histogram

What to look for

First of all, remember to make your output file available to the Trigger group. We will be preparing a separate web page to collect all the results. Please let us know what you got by posting to the Trigger Software HyperNews.

Here are some quick things you may want to look for yourself. They can all be read from the first plot called Trigger Line Rates.

That's it for now. Please feel free to ask questions. Thank you for participating in this. You will help us save BaBar some DAQ deadtime, while getting your voice heard for preserving your own physics signal.
Page author: Rainer Bartoldus
Created: 31-Aug-2006 Expiry date: 31-Aug-2008
Last update: 06-Aug-2007, by Kim Hojeong
