
Level 3 Trigger Task List

DRAFT, 16 February 1997

This is a fairly free-form and sketchy start to an evolving list of tasks, issues, and problems for Level 3 work. Quite a few pieces are still missing. Comments and, a fortiori, offers to work on tasks are always welcome - please e-mail them to Gregory Dubois-Felsmann. The level of detail varies considerably, reflecting which questions I have been thinking about most; more detail in other areas is welcome. If the list becomes too unwieldy, I will turn this page into a hierarchy.

I'm planning to include a discussion of this list or a summary of it in the Level 3 mini-plenary on 19 February.

Algorithms

  • General algorithms based on charged particle tracking
    • Level 1 DC trigger TSF segment-based tracking
    • Adding the actual drift chamber hits to tracks found from segments
    • SVT-only tracking and vertex pointing
    • Joining SVT and drift chamber track information
  • General algorithms based on calorimetry
    • Orthogonal algorithms for general B physics
    • Algorithms for all-neutral events
  • Applications for the DIRC or the IFR in Level 3?
  • Algorithms to identify specific processes (e+e- physics and background)
    • Priority processes required to be flagged for special treatment in OEP: B physics, Bhabhas, dimuons
    • Which others may be interesting (e.g., high momentum cosmics for calibration purposes)?
  • Steering algorithm & dispatch algorithm
    • Coordinate all algorithms and handle special features like prescaling and random triggers
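As one illustration of the special features the steering algorithm must handle, prescaling can be sketched as a simple keep-1-in-N counter. This is a minimal sketch only; the `Prescaler` name and interface are hypothetical, not part of any agreed design:

```cpp
// Sketch of a simple prescale counter (illustrative, not the actual
// Level 3 steering code): keep 1 out of every N events of a category,
// e.g. for Bhabhas retained for calibration purposes.
class Prescaler {
public:
    explicit Prescaler(unsigned long factor) : factor_(factor), count_(0) {}

    // Returns true if this event should be kept.
    bool accept() { return (count_++ % factor_) == 0; }

private:
    unsigned long factor_;  // keep 1 event in every 'factor_'
    unsigned long count_;   // events seen so far
};
```

A random-trigger path would look similar but draw on a random number generator instead of a modulus counter.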

Implementation

  • Common "mini-module" class interface for algorithms? (related to implementation of steering algorithm)
  • Performance
    • Measure efficiencies & rejection factors
    • Measure speed; obtain progressive estimates of CPU requirements as input to online farm procurement
    • Determine memory requirements and adopt appropriate limits (note that this will be an issue not only in the OEP machines but also in full-detail Monte Carlo jobs)
    • Develop a standard package (regression test?) for evaluating performance
  • Reliability, QA/QC
    • Special coding standards (OEP-wide?)
    • Exception handling
    • Protection of the raw data:
      Level 3 is one of the few OEP modules that will be permitted to change the event, albeit nominally only by adding to it. Can the rest of the event be protected from accidental change?
    • Infrastructure for final tests of new algorithms in the production environment ("alternative Level 3's")
  • Code development and management
    • Return to using agreed-upon stable base releases in SRT once the vendor compiler switchover is complete (where possible)
    • Include Level 3 in CVS as package(s), for history and cross-development; consider whether and how to include it in releases (linked to online code management issues)
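A common "mini-module" class interface, as raised above, might look something like the following C++ sketch. All names here are hypothetical illustrations, not the actual OEP interface; the real shape would be settled together with the steering-algorithm design:

```cpp
#include <string>

struct L3Event {};  // stand-in for the real event class

// Hypothetical sketch of a common "mini-module" base class.  The steering
// algorithm would own a list of these and drive every Level 3 algorithm
// through the same pair of calls.
class L3MiniModule {
public:
    explicit L3MiniModule(const std::string& name) : name_(name) {}
    virtual ~L3MiniModule() {}

    virtual void beginRun() {}                         // per-run setup
    virtual bool processEvent(const L3Event& ev) = 0;  // vote to accept?

    const std::string& name() const { return name_; }

private:
    std::string name_;
};
```

A uniform interface of this kind would also make it straightforward to run the same modules in a generic offline environment for testing.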

Interfaces and Interactions

Level 1

  • Data format from L1 trigger processor, and determination that all available useful information is really being returned
    • Support of two-photon triggers is an open issue
  • Possible parallelized preprocessing for L3 algorithms' use
  • Synchronization of TSF lookup tables (configuration management)

Event structure

  • Data format from all front-end systems - optimized for L3 access, if required (L1 covered above)
  • Format for Level 3 output data in event
    • Save all data needed for dispatching, displays, diagnostics
    • Make self-serializing (online raw data buffer requirement)
    • Does all monitoring data need to be logged to permanent store?
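One way to read the self-serializing requirement is sketched below: the Level 3 output record packs itself into a length-prefixed flat byte buffer that can be appended to the raw event. The `L3Result` record and its fields are hypothetical; the real format and field set are open questions:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch of a self-serializing Level 3 output record.
// Layout: [length word][nTracks][acceptMask], each 4 bytes.
struct L3Result {
    std::uint32_t nTracks;
    std::uint32_t acceptMask;

    std::vector<std::uint8_t> serialize() const {
        std::vector<std::uint8_t> buf(12);  // length word + two fields
        const std::uint32_t length = 12;
        std::memcpy(&buf[0], &length, 4);
        std::memcpy(&buf[4], &nTracks, 4);
        std::memcpy(&buf[8], &acceptMask, 4);
        return buf;
    }

    static L3Result deserialize(const std::uint8_t* data) {
        L3Result r;
        std::memcpy(&r.nTracks, data + 4, 4);
        std::memcpy(&r.acceptMask, data + 8, 4);
        return r;
    }
};
```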

OEP Framework and offline frameworks

  • Interaction with module interface, especially in the event dispatch function
  • Should be runnable in generic offline environment for testing

Monitoring and diagnostics

  • Statistics
  • Event display subpackage for Level 3 results
  • Histograms
  • Correlation with reconstruction as test of tracking
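For the histogram item above, even a minimal fixed-bin histogram would serve for first monitoring studies. This is an illustrative sketch only; in practice one would presumably use a shared histogramming package:

```cpp
#include <vector>

// Minimal sketch of a fixed-bin 1D monitoring histogram.
class Histogram1D {
public:
    Histogram1D(int nBins, double lo, double hi)
        : counts_(nBins, 0), lo_(lo), hi_(hi) {}

    void fill(double x) {
        if (x < lo_ || x >= hi_) return;  // ignore out-of-range entries
        int bin = static_cast<int>((x - lo_) / (hi_ - lo_) * counts_.size());
        ++counts_[bin];
    }

    long binContent(int bin) const { return counts_[bin]; }

private:
    std::vector<long> counts_;
    double lo_, hi_;
};
```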

Simulation

  • BaBar simulation system (BBSIM and then GEANT4) must be able to provide simulated raw data for Level 3 analysis, ideally in the actual online format (perhaps using an additional filter)

Calibration

  • Any roles for something like Level 3 in calibration? (Example: identifying Bhabhas)
  • Dependencies on online and offline/PR calibration constants, including dead/hot channel lists? (Configuration management, again)
    • Dead/hot channel lists
    • Precision required for calibration constants? Must follow all updates?
  • Is online calibration of Level 3 a meaningful concept? Can one imagine testing Level 3 with low-level pattern inputs? Are playbacks useful in this respect?

Configuration and control

  • Mechanism for obtaining any Level 3-specific configuration information in the online? (Run Control? Conditions database?)
    • Algorithm tuning (e.g., vertexing cuts)
    • Dispatch tuning to allocate use of 100 Hz logging bandwidth
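One small piece of the dispatch-tuning problem above can be made concrete: given a category's expected accept rate and its share of the 100 Hz logging budget, find the smallest integer prescale factor that keeps the logged rate within budget. The function below is a sketch under that assumption; the overall allocation policy across categories remains an open design question:

```cpp
#include <cmath>

// Hypothetical sketch: smallest integer prescale factor such that
// inputRateHz / factor <= budgetHz.
unsigned prescaleFactor(double inputRateHz, double budgetHz) {
    if (inputRateHz <= budgetHz) return 1;  // no prescaling needed
    return static_cast<unsigned>(std::ceil(inputRateHz / budgetHz));
}
```

For example, a 500 Hz Bhabha rate allocated 50 Hz of the logging budget would need a prescale factor of 10.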

Management and documentation

  • Continued preparation of requirements and requirements document
  • Organization of the CD and PD reviews
  • Definition of QA/QC procedure
  • Definition of a detailed schedule for key tasks, especially service tasks not directly related to algorithms

Gregory P. Dubois-Felsmann
Last modified: Mon Feb 17 02:40:15 PST