
Tcl Files




Packages come with two main types of files: C++ files, ending in .cc or .hh, and Tcl files, ending in .tcl. When you compile and link your code, gmake puts together all the C++ code to build an executable, such as BetaMiniApp. But once you have the executable, the job of the C++ files is finished. Now it is the Tcl files' turn. It is the Tcl files that tell your executable what to do: they are the ones that put the modules together to make the analysis path.

You have already used the Tcl file snippet.tcl to run your analysis job. And you have even acted like a Tcl file yourself, talking to the Framework to input data and change some module parameters. But now it is time to go all the way, and learn how to get Tcl files to do everything for you, so that you no longer have to talk to the Framework.

In this section, you will learn about the different types of Tcl files that come with BaBar packages, and how to work with them.

The section begins with an example, and then moves on to some explanations.


Example 5: Setting parameters in a Tcl file

In the last section, you made a new module, ParmExample, with seven run-time parameters:

verbose
production
enableFrames
nbins
pMax
pMin
trackList

You can create a Tcl file called ParmExample.tcl that sets all of these parameters.


##  ParmExample.tcl: Set parameters for ParmExample

mod talk ParmExample

verbose set f 
production set f
enableFrames set f
nbins set 50
pMin set 0.0
pMax set 1.0
trackList set GoodTracksLoose

exit

Now when you get to the framework prompt (">"), all you have to do is type:

> sourceFoundFile BetaMiniUser/ParmExample.tcl

This will execute your script, and the parameters will be set.

You can make similar tcl scripts for any other commands that you don't want to type all the time. For example, you may get tired of talking to KanEventInput, so you create the following tcl script:


## MyKanEventInput.tcl

mod talk KanEventInput
input add /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
exit

Then to execute the script, again you just type:

> sourceFoundFile BetaMiniUser/MyKanEventInput.tcl
at the Framework prompt.

Let's try this and see how it works. Start the job as usual:

ana41> srtpath <enter> <enter>
ana41> cond22boot
ana41> cd workdir
ana41/workdir> BetaMiniApp snippet.tcl

When you get your Framework prompt, execute your new scripts with the command:

> sourceFoundFile BetaMiniUser/MyKanEventInput.tcl
> sourceFoundFile BetaMiniUser/ParmExample.tcl

You can confirm that the input lists have been added:

> mod talk KanEventInput
KanEventInput> input list
Collections:
       /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
Components:
            hdr
            tag
            tru
            aod
            cnd

KanEventInput> exit
>
and that ParmExample's parameters have been set:
> mod talk ParmExample
ParmExample> show
Current value of item(s) in the "ParmExample" module:

      Value of verbose for module ParmExample: f
      Value of production for module ParmExample: f
      Value of enableFrames for module ParmExample: f
      Value of nbins for module ParmExample is 50
      Value of pMin for module ParmExample is 0
      Value of pMax for module ParmExample is 1
      Value of trackList for module ParmExample: IfdStrKey(GoodTracksLoose)

ParmExample> exit
>
So everything is set up, and you can run the job as usual.
> ev beg -nev 40
> exit

Here are the resulting ParmExample histograms:

ParmExample histograms

Working examples of both files can be found in:

$BFROOT/www/doc/workbook/examples/ex5/


Tcl commands

The basic syntax for a Tcl command is:

command arg1 arg2 arg3 ...

The command is either a standard Tcl command, one of BaBar's special Tcl commands, or a Tcl procedure, which is a user-defined tcl command.

You already learned about many important BaBar tcl commands - module talk, help, exit, ev begin, and input add - in the Tcl: Run-time job control section. But so far, you have not learned any non-BaBar tcl commands.

However, it turns out that BaBar Tcl files do not use standard Tcl commands very often - they mostly just use the special BaBar commands. So you can navigate BaBar software just fine with almost no knowledge of ordinary Tcl!

This Super-quick introduction to TCL for BaBarians should provide you with all the Tcl background you need.
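
As a minimal illustration, here are the three kinds of command side by side. (The variable name, proc name, and values below are made up for this sketch; only "mod talk", "nbins set", and "exit" come from the examples above.)


# A standard Tcl command: set a variable
set myBinCount 50

# A BaBar Framework command: talk to a module and set a parameter
mod talk ParmExample
  nbins set 50
exit

# A user-defined Tcl procedure (proc), and a call to it
proc greet {name} {
    puts "Hello, $name"
}
greet BaBarian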


sourceFoundFile: How Tcl files work together

One of the most important BaBar tcl commands is sourceFoundFile:

sourceFoundFile <tclfile>

This transfers control to the new Tcl file. The interpreter moves on to that file and continues until it either (a) runs into another sourceFoundFile, or (b) reaches the end of the file. When it reaches the end, it returns to the original Tcl file, at the location where the sourceFoundFile was issued.

[Figure: a chain of sourceFoundFile calls among A.tcl, B.tcl, D.tcl, E.tcl, ...]

The above figure illustrates how this works. The interpreter begins at the top of A.tcl. Immediately, it is told to sourceFoundFile B.tcl, which sends it to D.tcl. Once at D.tcl, it finally gets its first non-sourceFoundFile command: command 1. It reaches the end of the file, and then returns to where it left off: in B.tcl, right after "sourceFoundFile D.tcl". Then it obeys command 2. After command 2, another sourceFoundFile sends it to E.tcl. And so on.

If you follow the trail of sourceFoundFiles, you will find that the net result is that the commands are issued in the order: 1-2-3-4-5-6-7-8-9-10.
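
For concreteness, here is one possible set of file contents consistent with the part of the figure described above. (The file names and the placeholder commands "command 1", "command 2", ... simply stand in for real Tcl commands.)


# A.tcl (sketch)
sourceFoundFile B.tcl
# ... rest of A.tcl ...

# B.tcl (sketch)
sourceFoundFile D.tcl;   # D.tcl contains command 1
command 2
sourceFoundFile E.tcl
# ... rest of B.tcl ...

# D.tcl (sketch)
command 1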

A real BaBar analysis job involves hundreds of tcl scripts, and the sourceFoundFile series can be many layers deep. If you are reading BaBar code, navigating through all these sourceFoundFiles can be confusing.

The good news is that analysis packages are set up so that (usually) the only Tcl files you need to worry about are the ones in your package. These Tcl files will often sourceFoundFile Tcl files from other packages. But these other Tcl scripts will just run standard physics sequences common to all analyses, that have been run and tested many times before.

With that in mind, let's look at the different types of Tcl file that show up in a BaBar package.


Tcl files in a BaBar analysis package

Tcl is much less structured than C++. In general, a Tcl command can be entered any time, anywhere. When the interpreter reaches a Tcl command, it executes it immediately. It doesn't care what file it is in.

However, by BaBar convention the Tcl files in BaBar packages can usually be classified by "type" according to their function. These include:

  • The main Tcl file for the package
  • Tcl files that talk to modules and set their parameters
  • Tcl files that set up networks of paths and sequences
  • "Snippet" Tcl files used to run analysis jobs.
  • Tcl files that define procedures (procs)
  • and more...

The main Tcl file

The most important Tcl file is the main Tcl file. The main Tcl file controls the analysis job.

In a typical analysis package (call it UserPackage), the main Tcl file performs a number of tasks:

  1. Define and set default values for important run-time variables (FwkCfgVar).
  2. Add input collections.
  3. Run standard physics sequences (via sourceFoundFiles to other packages)
  4. Perform other UserPackage tasks if required.
  5. Process events ("ev beg").

The main Tcl file is the first and last Tcl file to be read. Referring back to the diagram above, "A.tcl" would be the main Tcl file for the analysis job. The main Tcl file will source other Tcl files, and control may travel far away from it for a long time. But eventually the sourceFoundFile path will be exhausted, and control will return to the main Tcl file. Then, with everything set up, the main Tcl file begins the job with the "ev begin" command.

A package may come with more than one main Tcl file. This gives the user a choice. But you use only one main Tcl file in a given analysis job. (They do not ever sourceFoundFile each other, for example.)
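
Putting the five tasks together, a main Tcl file has roughly the following shape. (This is only a schematic sketch; the file, variable, and package names are placeholders, and real main Tcl files contain many more details.)


# MainTclFile.tcl (schematic sketch)

# 1. Define run-time variables and their defaults
FwkCfgVar NEvent
FwkCfgVar histFileName "default.root"

# 2. Input collections (typically supplied by the snippet via inputList
#    or entered in the input module)

# 3. Run the standard physics sequences of other packages
sourceFoundFile SomeSequencesPackage/StandardSequences.tcl

# 4. Other UserPackage tasks: configure modules, build paths and sequences
sourceFoundFile UserPackage/MyModuleSetup.tcl
sourceFoundFile UserPackage/MySequence.tcl

# 5. Process the events and end the job
if [info exists NEvent] {
  ev begin -nev $NEvent
} else {
  ev begin
}
exit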

MyMiniAnalysis.tcl: The prototype main Tcl file

BetaMiniUser's MyMiniAnalysis.tcl is a very good example of a main Tcl file. It contains detailed comments explaining what it is doing. Here is an annotated copy of MyMiniAnalysis.tcl, with comments in blue. Open it in another window so that you can refer to it as you read through this section.

Note that this is the original version of MyMiniAnalysis.tcl, not the modified Quicktour version. The main differences are that it does not include the ParmExample module, and that it keeps the "ev begin" and "exit" commands at the end of the file, as you will see below.


Step 1: FwkCfgVar and snippet Tcl files

The first task that the main Tcl file performs (after sourcing some standard setup scripts) is to define and set default values for FwkCfgVar.

FwkCfgVar ("Framework Configuration Variables") are job-configuration variables. They belong to the main Tcl file, not a module. So you set them directly, not via "mod talk."

FwkCfgVar are defined in the main Tcl file only. The syntax to define a FwkCfgVar is:


# Beginning of MainTclFile.tcl:

FwkCfgVar  Var1  Default1;    # FwkCfgVar with a default value
FwkCfgVar  Var2;              # FwkCfgVar with no default value
FwkCfgVarRequire  Var3;       # Variable must be set, or job will fail

The purpose of FwkCfgVar is to allow the user to configure analysis jobs in a "snippet" Tcl file. You make one snippet Tcl file per job. The snippet Tcl file sets the values of the FwkCfgVar defined in the main Tcl file.

Here is what the snippet Tcl file for MainTclFile.tcl would look like:


# snippet.tcl

set Var1 Value1
set Var2 Value2
set Var3 Value3

sourceFoundFile UserPackage/MainTclFile.tcl

The snippet Tcl file ends with a sourceFoundFile of the main Tcl file. The command to run the analysis job is then:

workdir> WhateverApp snippet.tcl
where WhateverApp is the name of the executable (for example, BetaMiniApp).

For example, in the Quicktour, the main Tcl file was MyMiniAnalysis.tcl. As you can see, the FwkCfgVar for MyMiniAnalysis.tcl are:

FwkCfgVar BetaMiniReadPersistence Kan
FwkCfgVar levelOfDetail "cache"
FwkCfgVar ConfigPatch "MC"
FwkCfgVar NEvent
FwkCfgVar BetaMiniTuple "root"
FwkCfgVar histFileName "MyMiniAnalysis.root"
FwkCfgVar MyMiniQA

Whoever made the snippet Tcl file looked in MyMiniAnalysis.tcl to find out which FwkCfgVar were available, and then put the ones they wanted to control on a job-by-job basis in snippet.tcl:


## snippet.tcl
## snippet Tcl file for the Workbook

## To run BetaMiniApp with this tcl snippet, go to workdir and type:
## BetaMiniApp snippet.tcl

# set some important parameters
set ConfigPatch MC
set levelOfDetail cache
set BetaMiniTuple root
set histFileName myHistogram.root

# Now send the job to the main Tcl file, MyMiniAnalysis.tcl
sourceFoundFile BetaMiniUser/MyMiniAnalysis.tcl

The advantage of this snippet setup is that it separates the basic physics-analysis configuration (in the main Tcl file) from the job configuration (in the snippet file). That way you do not have to edit MyMiniAnalysis.tcl every time you run a new job.

This is particularly useful for input/output. For example, the name of the output file is defined by the histFileName variable. Imagine you have three jobs to run. Then you would make three snippet Tcl files, and set histFileName to a different value for each job:


#---------------------------------
# snippet-1.tcl
set histFileName frodo-1.root
#---------------------------------

#---------------------------------
# snippet-2.tcl
set histFileName frodo-2.root
#---------------------------------

#----------------------------------
# snippet-3.tcl 
set histFileName frodo-3.root
#----------------------------------
Then you will get three output files: frodo-1.root, frodo-2.root, and frodo-3.root. On the other hand, if histFileName is the same for each job, you will get only the output file from the last job run, because later jobs will overwrite earlier jobs.

In summary: To set up an analysis job in the analysis package UserPackage, you need to:

  • Locate the main Tcl file in UserPackage, MainTclFile.tcl
  • Look in MainTclFile.tcl to find the FwkCfgVar
  • Make a snippet Tcl file for each analysis job.
  • In each snippet:
    • Set the FwkCfgVar
    • sourceFoundFile UserPackage/MainTclFile.tcl
  • Run the job with the command:
    WhateverApp snippet.tcl

Step 2: Input data.

The input of data or MC collections is independent of other steps, so it can be performed anywhere before the "ev begin" command.

Let's look at what MyMiniAnalysis.tcl's comments have to say about the input stage:

##
##  You can enter input collections two ways: either append them to a list, or
##  explicitly enter them in the input module. Do one or the other, BUT NOT 
##  BOTH.
##  If inputList is set before executing btaMini.tcl, that will automatically
##  add the collections to the appropriate input module, otherwise make sure you
##  talk to the right one.
##
## lappend inputList collection1 collection2 ...
##
##  OR THE FOLLOWING (choose the correct one based on persistence)
##
## talkto BdbEventInput {
## talkto KanEventInput {
##    input add collection1
##    input add collection2
##    ...
## }
So far, you have always entered input collections by talking to the input module, KanEventInput:

#---------------------------------------------------------------
## MyKanEventInput.tcl

mod talk KanEventInput
input add /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
exit
#---------------------------------------------------------------

This is the second method mentioned above. ("talkto" is a proc defined by the talkto.tcl script. It is basically equivalent to "mod talk.")
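
(If you are curious, the idea behind such a proc can be sketched in a few lines of Tcl. This is not the actual contents of talkto.tcl, just an illustration of how a proc can wrap "mod talk":)


# Sketch only -- not the real talkto.tcl
proc talkto {module script} {
    mod talk $module
    uplevel 1 $script
    exit
}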

You can perform the equivalent action using the lappend inputList syntax. Let's make a Tcl file that does that:


#-----------------------------------------------------------------------
# MyInputCollection.tcl

lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
#------------------------------------------------------------------------

This adds the collection to the input list. The end result is the same whether you talk to KanEventInput or use the lappend inputList command directly at the Framework level.

With these Tcl files in hand, you can add the input collection via a sourceFoundFile of either one of these scripts:

sourceFoundFile BetaMiniUser/MyKanEventInput.tcl
              OR 
sourceFoundFile BetaMiniUser/MyInputCollection.tcl

Later in this section you will learn how to use the BbkDatasetTcl utility to generate special Tcl files full of "lappend inputList" commands, so that you no longer have to create your own. You will never have to type collection names by hand again!


Step 3: Run standard physics routines

There is a standard set of physics routines that are common to all or most analyses. These are the basic routines that (among other things) create the run-time lists of particle candidates. The main Tcl file runs these sequences via a sourceFoundFile that sends control to the Tcl files responsible for these routines.

Here is a schematic diagram of how this task is delegated by MyMiniAnalysis.tcl:

------------------------------------------------------------------------------

MyMiniAnalysis.tcl    --->  btaMini[ ].tcl   --->  BetaMiniSequence.tcl
                                                   BetaMiniPhysicsSequence.tcl

  (BetaMiniUser)            (BetaMiniUser)          (BetaMiniSequences)

-------------------------------------------------------------------------------

The main physics routines are run by the Tcl files in the BetaMiniSequences package. Once all the sourceFoundFiles for those routines have been used up, control returns to the main Tcl file.
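
In terms of actual commands, the delegation shown in the diagram is just a chain of sourceFoundFile calls, schematically like this (the exact file names, and which btaMini variant is sourced, depend on the release and on configuration variables such as BetaMiniReadPersistence):


# In MyMiniAnalysis.tcl (schematic)
sourceFoundFile BetaMiniUser/btaMini.tcl

# In btaMini.tcl (schematic)
sourceFoundFile BetaMiniSequences/BetaMiniSequence.tcl
sourceFoundFile BetaMiniSequences/BetaMiniPhysicsSequence.tcl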


Step 4: Other UserPackage tasks

After running the standard physics routines, the main Tcl file turns to UserPackage tasks. Examples of typical UserPackage tasks include:

  • Enable/disable UserPackage modules
  • Create and append modules to UserPackage paths and sequences
  • Talk to modules, and set their run-time parameters

Usually, these tasks are delegated to other Tcl files in the package, and run via a sourceFoundFile. But shorter tasks may be included directly in the main Tcl file, if they are not complex enough to warrant a separate Tcl file.

(Remember that the Tcl interpreter does not care what file a Tcl command is issued in. The effect is the same whether you enter the command directly, or put it in a script and sourceFoundFile it.)

Let's take a look at the types of Tcl files that MainTclFile uses to perform its tasks:

Module Configuration

Some Tcl files are devoted to the task of talking to modules and setting their parameters. They look like this:


# MyModuleSetup.tcl

mod talk MyModule   
Par1 set x1
Par2 set x2
Par3 set x3
exit

Many scripts use "talkto" instead of "mod talk":


# MyModuleSetup.tcl

talkto MyModule {
Par1 set x1
Par2 set x2
Par3 set x3
exit
}

Your ParmExample.tcl file is a good example of a parameter-setting Tcl file.

Parameter-setting Tcl files are often created for selector modules, or modules that make particle candidate lists. This is because the run-time parameters for such modules are usually a list of cut values for the selector variables.
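
For instance, a parameter-setting Tcl file for a hypothetical track-selector module might look like this (the module name and cut parameters below are made up for illustration):


# MyTrackSelectorSetup.tcl (hypothetical module and parameters)

mod talk MyTrackSelector
pMin set 0.1
pMax set 5.0
d0Max set 1.5
outputList set MySelectedTracks
exit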

Some good examples of parameter-setting modules include:

Path or sequence setup

Some packages include Tcl files dedicated to setting up networks of paths and sequences. They create and append modules to sequences, append sequences to sequences, append sequences to paths, and so on.

Many sequences are useless if the modules are in the wrong order or if some modules in the sequence are not enabled. To maintain this coherence, there is often one script file per sequence. There is normally a one-to-one correspondence between scripts and sequences, but keep in mind that the two are not the same thing.

A typical sequence-creating Tcl file looks something like this:


#---------------------------------------------------------------------------
# MySequence.tcl
sequence create MySequence

sequence append MySequence Module1
sequence append MySequence Module2
sequence append MySequence Module3

sourceFoundFile SomePackage/Sequence1.tcl;   # Sequence1.tcl creates Sequence1
sequence append MySequence Sequence1
#--------------------------------------------------------------------------

#--------------------------------------------------------------------------
# Sequence1.tcl
sequence create Sequence1

sequence append Sequence1 Module4
sequence append Sequence1 Module5
#--------------------------------------------------------------------------

MySequence.tcl creates a sequence MySequence, and appends three modules to it. Then it sources a similar Tcl file that creates another sequence, Sequence1, with two modules. Then, when control returns to MySequence.tcl, it appends Sequence1 to MySequence.

The resulting analysis path would look something like this:

------------------------------------------------------------------------------
                                  MySequence
			               |
                  -------------------------------------------
                  |            |          |                  |
                  |            |          |              Sequence1
                  |            |          |                  |
ANALYSIS PATH: Module1 --> Module2 --> Module3 -->  Module 4 --> Module5

-------------------------------------------------------------------------------

Note that ultimately, what matters is the order of the modules (in this case, 1-2-3-4-5), not the upper-level sequence structure. But sequences make it easier for programmers to organize their code.
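
In other words, as far as the order of execution is concerned, the setup above is equivalent to appending the five modules one by one (a sketch; "MyPath" is a hypothetical path name):


# Equivalent flat ordering (sketch)
path append MyPath Module1
path append MyPath Module2
path append MyPath Module3
path append MyPath Module4
path append MyPath Module5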

Here are some good examples of sequence-creating Tcl files:


Step 5: Process events

Now everything has been set up: the analysis path is set, the parameters of all the modules are set, the input collections have been added to the input list. It is time to give the modules some events to analyze. MyMiniAnalysis.tcl starts the event processing loop with the command:

if [info exists NEvent] {
  ev begin -nev $NEvent
} else {
  ev begin
}

NEvent is one of the FwkCfgVar defined earlier in the file. The user can set NEvent in the snippet Tcl file for the job. Then the job will run over NEvent events. If NEvent is not set (the "else" case), then MyMiniAnalysis.tcl uses the default command, "ev begin", which runs over all events on the input list.

Finally, the main Tcl file ends the job and exits the Framework:

exit

This is the last command. The job is over.

Because the "exit" command exits the Framework, if a tcl file contains the "exit" command then you will not get a Framework prompt. The job will run without stopping. To get a Framework prompt you have to remove the "exit" command.

In the Quicktour version of MyMiniAnalysis.tcl, both the "ev begin" and "exit" commands were removed so that you could practice talking to the Framework.

Example 6: A non-interactive job controlled by Tcl files

In this example, you will use what you have just learned to set up two analysis jobs that run without stopping.

As a first step, you will start over with a copy of the original MyMiniAnalysis.tcl, not the Quicktour version. This version of MyMiniAnalysis.tcl contains the "ev begin" and "exit" commands, so it will run without stopping. Copy MyMiniAnalysis.tcl to your BetaMiniUser directory:

ana41> cp $BFROOT/www/doc/workbook/examples/ex6a/MyMiniAnalysis.tcl BetaMiniUser/

(You could also copy it straight from $BFROOT/dist/releases/analysis-41/BetaMiniUser/, but then you will probably have to change the file permissions with unix's chmod command before you will be allowed to edit it.)

Add the ParmExample module as before:

path append Everything ParmExample

Since ParmExample is an improved version of QExample, we won't include QExample this time. Because QExample is not added to the analysis path, it won't be run, but let's disable it just to be safe:

module disable QExample

Next, configure ParmExample:

sourceFoundFile BetaMiniUser/ParmExample.tcl

To add the input collection, you could use MyKanEventInput.tcl or MyInputCollection.tcl. Let's try the new one, MyInputCollection.tcl:

sourceFoundFile BetaMiniUser/MyInputCollection.tcl
Now your MyMiniAnalysis.tcl file should look like this:

MyMiniAnalysis.tcl with ParmExample

(For the sake of logical organization, I put the ParmExample sourceFoundFile in the UserPackage task section, and the InputCollection sourceFoundFile in the input section. But in principle the tasks performed by these scripts are independent of the others, so you can put them anywhere before the "ev begin" command.)
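
Schematically, the additions to MyMiniAnalysis.tcl amount to the following lines (surrounding content omitted; the placement is the organizational choice described above):


# ... input section of MyMiniAnalysis.tcl ...
sourceFoundFile BetaMiniUser/MyInputCollection.tcl

# ... UserPackage task section ...
path append Everything ParmExample
module disable QExample
sourceFoundFile BetaMiniUser/ParmExample.tcl

# ... "ev begin" and "exit" at the end of the file ...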

Next, let's make two snippet files for two different jobs:


#---------------------------------------------------
# snippet-40.tcl
set NEvent 40
set histFileName ParmExample-40.root
set levelOfDetail cache
set ConfigPatch MC
set BetaMiniTuple root

sourceFoundFile BetaMiniUser/MyMiniAnalysis.tcl
#---------------------------------------------------

#---------------------------------------------------
# snippet-80.tcl
set NEvent 80
set histFileName ParmExample-80.root
set levelOfDetail cache
set ConfigPatch MC
set BetaMiniTuple root
sourceFoundFile BetaMiniUser/MyMiniAnalysis.tcl
#--------------------------------------------------

Put these snippet files in workdir.

Now you are ready to run the jobs. First, if you have not done so already:

ana41> srtpath <enter> <enter>
ana41> cond22boot
ana41> cd workdir

Run the 40-event job:

ana41/workdir> BetaMiniApp snippet-40.tcl

Then run the 80-event job:

ana41/workdir> BetaMiniApp snippet-80.tcl

Running the job with Tcl files is much faster and easier than typing all the commands yourself!

When both jobs are done, you should have two new ROOT files in workdir: ParmExample-40.root and ParmExample-80.root. It is lucky that you gave them different names, or the second ROOT file would have overwritten the first one!

Now let's examine the histograms:

ana41/workdir> bbrroot

root[] TFile f40("ParmExample-40.root");
root[] f40.ls();

TFile**         ParmExample-40.root     Created for you by RooTupleManager
 TFile*         ParmExample-40.root     Created for you by RooTupleManager
  KEY: TH1F     h1d1;1  MC reco abs mtm difference
  KEY: TH1F     h1d2;1  Reco track momentum
  KEY: TH1F     h1d3;1  nTrack
  KEY: TH1F     h1d4;1  pTrack
  KEY: TH1F     h1d5;1  TagInspector Status

root[] h1d3->Draw();
root[] h1d4->Draw();

root[] TFile f80("ParmExample-80.root");
root[] f80.ls();

TFile**         ParmExample-80.root     Created for you by RooTupleManager
 TFile*         ParmExample-80.root     Created for you by RooTupleManager
  KEY: TH1F     h1d1;1  MC reco abs mtm difference
  KEY: TH1F     h1d2;1  Reco track momentum
  KEY: TH1F     h1d3;1  nTrack
  KEY: TH1F     h1d4;1  pTrack
  KEY: TH1F     h1d5;1  TagInspector Status

root[] h1d3->Draw();
root[] h1d4->Draw();

Your histograms should look like this:

Tracks-per-event histograms

Momentum histograms

Sure enough, the nTrack histogram for the 40-event job has 40 events, and the 80-event histogram has 80 events.


Input data: Tcl files from BbkDatasetTcl

In a real analysis, most users run over all events. So the thing that varies from job to job is not the number of events to process. Instead, different jobs correspond to different sets of input collections.

In the last section, you used your own homemade Tcl file to input a collection:

#-----------------------------------------------------------------------
# MyInputCollection.tcl

lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
#------------------------------------------------------------------------

But a real analysis involves hundreds of collections. A Tcl file for a more realistic analysis would input many collections:


#-------------------------------------------------------------------------
# MyBiggerCollection.tcl

lappend inputList collection_1
lappend inputList collection_2
lappend inputList collection_3

...

lappend inputList collection_N
#-------------------------------------------------------------------------

BaBar has a utility called BbkDatasetTcl that automatically generates Tcl files of this form. In the following section, you will learn how to use BbkDatasetTcl files to input data.

How to run on BbkDatasetTcl files

Note: You can choose to try the examples in this section, or if you prefer, you can just read this section for now, and refer back to it when you are running your real analysis.

Back in the Quicktour, you used BbkDatasetTcl to produce a Tcl file called SP-1237-Run4.tcl. This file is full of "lappend inputList collection_name" statements.

Tcl files produced by BbkDatasetTcl are designed to be used directly in BaBar analysis. It is very simple to add all of the collections in the SP-1237-Run4 data set to your job's input list:


Step 1: Add the following line to your snippet.tcl file:

set inputTclfile SP-1237-Run4.tcl

Step 2: Add the following line to your main Tcl file (in this case, MyMiniAnalysis.tcl):

sourceFoundFile $inputTclfile

This line replaces "sourceFoundFile BetaMiniUser/MyInputCollection.tcl."

Now all of the collections listed in SP-1237-Run4.tcl will be appended to the input list.

In practice, it is not feasible to run over the full SP-1237-Run4 dataset at once; it is simply too large. Instead, you divide the dataset among many different Tcl files.

For the sake of a simpler example, I'll use a smaller dataset to demonstrate: SP-1237-BPCElectron-Run4-R18b:

ana41/workdir> mkdir tcl
ana41/workdir> cd tcl
ana41/workdir/tcl> BbkDatasetTcl SP-1237-BPCElectron-Run4-R18b --tcl 50000 --splitruns

BbkDatasetTcl: wrote SP-1237-BPCElectron-Run4-R18b-1.tcl (2 collections, 50000 events)
BbkDatasetTcl: wrote SP-1237-BPCElectron-Run4-R18b-2.tcl (3 collections, 50000 events)
BbkDatasetTcl: wrote SP-1237-BPCElectron-Run4-R18b-3.tcl (3 collections, 50000 events)
...
...
BbkDatasetTcl: wrote SP-1237-BPCElectron-Run4-R18b-22.tcl (3 collections, 50000 events)
BbkDatasetTcl: wrote SP-1237-BPCElectron-Run4-R18b-23.tcl (9 collections, 50000 events)
BbkDatasetTcl: wrote SP-1237-BPCElectron-Run4-R18b-24.tcl (6 collections, 13642 events)
Selected 58 collections, 1163642/167716000 events, ~0.0/pb, from bbkr18 at slac

(The same commands would work for the bigger SP-1237-Run4 dataset, but you would get more than 3000 Tcl files, so it's best not to use that as an example.)

Now you have split the input among 24 Tcl files, and stored them in the directory workdir/tcl. The "--tcl 50000" option ensures that each Tcl file has no more than 50000 events.

You will learn about BbkDatasetTcl commands and what the different dataset names mean in great detail in the Workbook's Find Data section. For now, however, we will turn our attention to a different problem: how to use and manage all these Tcl files once you have them.

Handling multiple tcl files

If you have multiple BbkDatasetTcl files, then you will need multiple snippets, one for each Tcl file; in this case, you will need 24. You are also going to end up with 24 log files and 24 output files, so it's a good idea to make a directory for each kind of file:

workdir> mkdir log
workdir> mkdir snippet
workdir> mkdir output

The only difference from one snippet to the next will be these two lines (shown here for the 10th Tcl file):

set inputTclfile tcl/SP-1237-BPCElectron-Run4-R18b-10.tcl
set histFileName output/SP-1237-BPCElectron-Run4-R18b-10.root
So for our MyMiniAnalysis example, the 10th snippet looks like this:

#---------------------------------------------------
# snippet-10.tcl

set inputTclfile tcl/SP-1237-BPCElectron-Run4-R18b-10.tcl
set histFileName output/SP-1237-BPCElectron-Run4-R18b-10.root
set levelOfDetail cache
set ConfigPatch MC
set BetaMiniTuple root

sourceFoundFile BetaMiniUser/MyMiniAnalysis.tcl
#---------------------------------------------------
(This time the snippet does not set NEvent, so the job will run over all events by default.)

It would be very time-consuming to make 24 snippet files yourself, so most users develop scripts that generate large batches of snippets. Here is a Perl script that generates the 24 snippets. To run it, copy it to your workdir directory:

workdir> cp $BFROOT/www/doc/workbook/examples/ex6/MultiSnippet.pl .
and then enter the command:
workdir > perl MultiSnippet.pl SP-1237-BPCElectron-Run4-R18b 24
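
(If you are curious what such a generator might look like, here is a minimal sketch written in Tcl rather than Perl. It is not the real MultiSnippet.pl; it simply writes snippet files following the naming conventions used above.)


# makeSnippets.tcl -- sketch of a snippet generator (illustrative only)
# Usage: tclsh makeSnippets.tcl SP-1237-BPCElectron-Run4-R18b 24

set dataset [lindex $argv 0]
set nFiles  [lindex $argv 1]

for {set i 1} {$i <= $nFiles} {incr i} {
    set f [open "snippet/snippet-$i.tcl" w]
    puts $f "# snippet-$i.tcl (generated)"
    puts $f "set inputTclfile tcl/$dataset-$i.tcl"
    puts $f "set histFileName output/$dataset-$i.root"
    puts $f "set levelOfDetail cache"
    puts $f "set ConfigPatch MC"
    puts $f "set BetaMiniTuple root"
    puts $f "sourceFoundFile BetaMiniUser/MyMiniAnalysis.tcl"
    close $f
}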

In the last section, you put the "exit" command back into MyMiniAnalysis.tcl so that the jobs will run without stopping. You also put the "ev begin" command and all your other instructions to the Framework in MyMiniAnalysis.tcl, or in Tcl scripts sourced by MyMiniAnalysis.tcl. As a result, your jobs no longer require user input.

Now that your jobs do not require user input, you can (and should) submit them to the batch queue. This ensures that computing resources are shared fairly between you and other users. The queue for running jobs (as opposed to compiling and linking with gmake) is usually the kanga queue. You send your jobs to the kanga queue with the following commands (from workdir):

bsub -q kanga -o log/SP-1237-BPCElectron-Run4-R18b-1.log BetaMiniApp snippet/snippet-1.tcl
bsub -q kanga -o log/SP-1237-BPCElectron-Run4-R18b-2.log BetaMiniApp snippet/snippet-2.tcl 
...
bsub -q kanga -o log/SP-1237-BPCElectron-Run4-R18b-24.log BetaMiniApp snippet/snippet-24.tcl 

You will probably want to put these 24 commands in a script and then source it. (Any command that you enter at the command-line can be put in a unix script, or shell script, and run with the command "source <script name>".) Here is a Perl command to generate the script:

workdir> perl -e 'foreach $N (1..24) 
{print "bsub -q kanga -o log/SP-1237-BPCElectron-Run4-R18b-$N.log 
BetaMiniApp snippet/snippet-$N.tcl\n"}' > & submit.job

(This is actually a single command line; it has been split here only for formatting purposes.)

This will generate a file called submit.job in workdir. If you examine it you will find that it contains all 24 of the above commands. Now all you have to do to submit your 24 jobs is source the script:

workdir> source submit.job

The system will respond with 24 messages like:

Job <840401> is submitted to queue <kanga>.

As usual, you can use "bjobs", "bpeek" and other batch commands to check the progress of your jobs.

You will probably want to develop a strategy for keeping track of your many snippets, tcl files, log files, and jobs. The CM2 tutorial provides one example of tcl-file bookkeeping.


Page maintained by Adam Edwards

Last modified: January 2008