To see a list of data sets available at your site, enter the command
without arguments. (To keep the output from scrolling past all at once, you may wish to pipe it to less ("BbkDatasetTcl | less"), or redirect it to a file ("BbkDatasetTcl >& BbkDatasetTcl.txt").)
The available datasets are also documented on the Data Quality homepage, which links to a page describing the data sets for each release (Release 18, for example).
If you know the name of the dataset that you want, you can search for it with the "-l WILDCARD" option, for example:
> BbkDatasetTcl -l "Inclppbar*"
BbkDatasetTcl: 142 datasets found in bbkr18 at slac:-
Inclppbar-Run1-OffPeak-R18b
Inclppbar-Run1-OffPeak-R18b-v02
Inclppbar-Run1-OffPeak-R18b-v03
Inclppbar-Run1-OffPeak-R18b-v04
Inclppbar-Run1-OffPeak-R18b-v05
Inclppbar-Run1-OffPeak-R18b-v06
Inclppbar-Run1-OffPeak-R18b-v07
Inclppbar-Run1-OffPeak-R18c
Inclppbar-Run1-OffPeak-R18c-v03
Inclppbar-Run1-OffPeak-R18c-v04
Inclppbar-Run1-OffPeak-R18c-v05
...
Inclppbar-Run5-OnPeak-R18c-v05
Inclppbar-Run5-OnPeak-R18c-v06
Inclppbar-Run5-OnPeak-R18c-v07
Inclppbar-Run5-OnPeak-R22b
Inclppbar-Run5-OnPeak-R22b-v02
The names of the different data sets have the following form:
SkimName-Run[1-5]-[On/Off]Peak-RXX[a/b/c]

Examples:

AllEventsSkim-Run5-OffPeak-R22a
BchToD0KstarAll-Run4-OffPeak-R18b
DmixD0ToKPiPi0-Run3-OnPeak-R22b
InclPhi-Run2-OnPeak-R18c
Kll-Run3-OffPeak-R22b-v02
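The naming convention is regular enough to pick apart mechanically. The sketch below is illustrative only (the regular expression is my own, not an official BaBar tool); it splits a dataset name into the components described in this section, including the optional -vNN version suffix seen in some names:

```python
import re

# Hypothetical parser for the naming convention
# SkimName-Run[1-7]-[On/Off]Peak-RXX[a/b/c], with an optional -vNN suffix.
DATASET_RE = re.compile(
    r"^(?P<skim>\w+)"
    r"-Run(?P<run>\d)"
    r"-(?P<peak>On|Off)Peak"
    r"-R(?P<release>\d+[a-z])"
    r"(?:-(?P<version>v\d+))?$"
)

def parse_dataset(name):
    """Split a dataset name into its components; None if it doesn't match."""
    m = DATASET_RE.match(name)
    return m.groupdict() if m else None

print(parse_dataset("Kll-Run3-OffPeak-R22b-v02"))
# {'skim': 'Kll', 'run': '3', 'peak': 'Off', 'release': '22b', 'version': 'v02'}
```

Names without a version suffix, such as AllEventsSkim-Run5-OffPeak-R22a, parse the same way with the version field left empty.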
SkimName indicates the type of data set or skim.
The most general data set available is the AllEvents data set. Data begins as signals in the different subdetectors; these signals are digitized and stored in XTC files. The XTC files are sent to the prompt reconstruction ("PR") system, which reconstructs particle candidates from the detector signals. The output of prompt reconstruction is the AllEvents dataset.
The AllEventsSkim data set is similar to AllEvents, except that each event is labeled with over a hundred tags. Tags are boolean variables (set to true or false) that indicate whether an event has a given characteristic. For example, the Jpsitoll tag is set to true if the event contains a J/psi to l+l- decay, and false otherwise. The AllEventsSkim data set is created by running a skim executable over AllEvents.
The remaining data sets are skims produced from AllEventsSkim. A skim is a subset of the data selected by requiring a given tag (or combination of tags) to be true. For example, the Jpsitoll skim is the subset of events in AllEventsSkim that have Jpsitoll=true. A skim does not necessarily consist of a physical copy of events in AllEventsSkim; sometimes it consists of pointers to the selected events instead. But "deep copy" and pointer skims look the same to the user.
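The tag-based selection can be pictured with a toy model in plain Python (illustrative only; real BaBar events and skim executables are far more complex): each event carries a dictionary of boolean tags, and a skim keeps the events for which a chosen tag is true.

```python
# Toy model of tag-based skimming (not BaBar code).
events = [
    {"id": 1, "tags": {"Jpsitoll": True,  "InclPhi": False}},
    {"id": 2, "tags": {"Jpsitoll": False, "InclPhi": True}},
    {"id": 3, "tags": {"Jpsitoll": True,  "InclPhi": True}},
]

def skim(events, tag):
    """Select the subset of events whose given tag is true."""
    return [e for e in events if e["tags"].get(tag, False)]

jpsi_skim = skim(events, "Jpsitoll")
print([e["id"] for e in jpsi_skim])  # [1, 3]
```

Whether the selected events are deep-copied or merely pointed to is an implementation detail the user never sees, exactly as described above.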
The Run Cycle denotes the data-taking period, as shown in the table below.
Run Cycle   Start       End
Run1        Feb 2000    Oct 2000
Run2        Feb 2001    Jun 2002
Run3        Dec 2002    Jun 2003
Run4        Sep 2003    Jul 2004
Run5        May 2005    Aug 2006
Run6        Jan 2007    Aug 2007
Run7        Dec 2007    Sep 2008
RXX is the release series of the reconstruction software used to process the data. Data is initially processed with whatever release is current at the time; later, when a new and improved release becomes available, the data is reprocessed using the new software. In general you want to use the data set that was processed with the same release series as your test release. For example, in the Quicktour you used analysis-41, which is based on release 22.1.1, a 22-series release. So you would want the data sets ending in "R22".
Fortunately, BbkDatasetTcl is smart: once you have entered "srtpath" and "cond22boot" from analysis-41, it knows that you are using a 22-series release, and will list the R22 collections.
This year (2007), everyone will be using R18 and R22 data, so you will probably not have to worry about older releases unless you are continuing an older analysis.
Skims are produced at regular intervals, in skim cycles. This ensures that researchers do not have to wait too long to create or update skims to incorporate new physics ideas.
Usually several skim cycles are run for a given release, so the release number alone does not uniquely identify a skim cycle. Therefore the different skim cycles within a release are labeled [a/b/c].
At the time of writing (April 2007), the first R22 skim cycle is in progress. This cycle is called R22a, and its target deadline is May 1, 2007. So many people are still using R18 datasets, for which there have been several skim cycles.
BbkDatasetTcl also lists the available Monte Carlo (simulated) sets. The names are similar to the data set names, of the form:
SP-1237-AllEventsSkim-Run5-R22a
SP-1005-BchToD0KstarAll-Run4-OffPeak-R18b
SP-998-DmixD0ToKPiPi0-Run3-R22b
SP-1235-InclPhi-Run2-R18c
SP-3981-Kll-Run3-OffPeak-R22b
To find the definition of a certain decay mode, for example 1237, you can use BbkSPModes:
> BbkSPModes --modenum 1237
The system will respond:
: Mode : Decfile             : Generator   : Filter : Run Type        : Category       :
: 1237 : B0B0bar_generic.dec : Upsilon(4S) :        : B0B0bar generic : generic decays :
To find out more about BbkSPModes, you can check the BbkSPModes web page, or type "BbkSPModes --help" at the command line.
If you use a skim of your data set, then you will want to study the same skim of your Monte Carlo set, so you can compare the two. Decay modes like 1237=B0B0bar_generic are standard decays that show up (as background) in nearly all analyses, so nearly all skims are run over decay mode 1237. However, for other decays, like 3527 = B0toD2StarPi_D2StartoD0Pi_D0toKPi.dec, the only skims available (besides the standard ones) are:
SP-3527
SP-3527-AllEventsSkim-R18[b,c,c-F2KBug,R22a]
SP-3527-B0DNeutralLight-R18[b,c,c-F2KBug,R18b,R22b]
SP-3527-BtoDPiPi-R18[b,c,c-F2KBug,R18b,R22b]
SP-3527-Run[1,2,3,4,5]-[F2KBug,G4Bug,R22]
This probably means that mode 3527 was produced for a particular analysis that uses only the B0DNeutralLight and BtoDPiPi skims. So only those skims (and the AllEventsSkim, from which they are derived) were produced.
Runs are data-taking periods, not MC production periods. However, Monte Carlo data sets are designed to reproduce the data as closely as possible, including the conditions (detector, online, parameters) at the time. So MC data sets are labeled with Run Cycles that indicate which data sets they are intended to model.
Simulated (Monte Carlo) data sets are produced in Simulation Production (SP) cycles:
SP1, SP2, SP3 = obsolete
SP4 = Release 10
SP5 = Release 12
SP6 = Release 14
SP7 = none
SP8 = Release 18
SP9 = Release 22
(SP7 would have been Release 16, but they decided not to produce it.)
The plain "BbkDatasetTcl" command tells you only about the collections at the Tier A site that you are logged in to. A Tier A site is a computing facility. SLAC's Tier A sites are:
Each skim is assigned to an Analysis Working Group (AWG). So in order to access your skim, you need to log in to the Tier A site that hosts the AWG that owns it. Detailed information is available here:
Data Distribution page
Once you have determined which data sets you need, you can use BbkDatasetTcl to produce tcl files that tell your application how to access the data sets. The simplest form of the BbkDatasetTcl command is:
> BbkDatasetTcl DATASET
This will produce a single file, DATASET.tcl.
In practice, however, you will probably want to use a more complicated BbkDatasetTcl command:
> BbkDatasetTcl DATASET --tcl Nmax --splitruns --basename MYNAME
For example, in the Quicktour you used BbkDatasetTcl to produce a single tcl file:
> BbkDatasetTcl SP-1237-Run4
However, if there are too many events in your tcl file, your job may fail due to CPU time limits. To avoid this, you will want to divide the events among many tcl files:
> BbkDatasetTcl SP-1237-Run4 --tcl 100k --splitruns --basename MC-B0B0bar-Run4
Now instead of one big tcl file, you have many tcl files of up to 100k events each: MC-B0B0bar-Run4-1.tcl, MC-B0B0bar-Run4-2.tcl, MC-B0B0bar-Run4-3.tcl ... MC-B0B0bar-Run4-N.tcl.
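To estimate how many tcl files a given --tcl limit will produce, the arithmetic is just a ceiling division. A minimal sketch (the event total below is a made-up number for illustration; use the actual size of your dataset):

```python
import math

# Hypothetical total number of events in the dataset.
n_events = 4_300_000
events_per_file = 100_000   # the "--tcl 100k" limit

# Ceiling division: any partial leftover chunk still needs its own file.
n_files = math.ceil(n_events / events_per_file)
print(n_files)  # 43
```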
You can determine how many events you should include in one tcl file by submitting a test job and seeing when it crashes.
Note that all datasets evolve continuously with time. This is due to new data being added, old data being found to be bad, or reprocessing.
For more information about how to use BbkDatasetTcl to produce tcl files, see the Bookkeeping User Tools web page (in particular the section on "Evolving Datasets" gives details on how to update an analysis when new data becomes available), or type "BbkDatasetTcl --help" at the command line.
If you look in your SP-1237-Run4.tcl file from the Quicktour, you will see a bunch of lines like this:
# 138000/138000 events selected from 69 on-peak runs, added to dataset at 2005/11/04-22:50:14-PST, lumi = ~0.0/pb
lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
# 48000/48000 events selected from 24 on-peak runs, added to dataset at 2005/11/05-04:48:59-PST, lumi = ~0.0/pb
lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013240
# 6000/6000 events selected from 3 on-peak runs, added to dataset at 2005/11/05-04:48:58-PST, lumi = ~0.0/pb
lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013270
# 138000/138000 events selected from 69 on-peak runs, added to dataset at 2005/11/03-22:47:54-PST, lumi = ~0.0/pb
lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013286
# 104000/104000 events selected from 52 on-peak runs, added to dataset at 2005/11/03-22:47:54-PST, lumi = ~0.0/pb

The lines beginning with "#" are comments. The other lines are tcl commands: "lappend inputList" adds a collection to the input module's list of collections to be analyzed.
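Because the comments quote the event count for each collection, you can total up the events in a tcl file with a few lines of Python. This is a hypothetical helper, not a BaBar tool, and it relies on the comment format shown above staying the same:

```python
import re

# Matches comment lines of the form "# N/M events selected ..."
COMMENT_RE = re.compile(r"^#\s*(\d+)/(\d+) events selected")

def count_events(tcl_text):
    """Sum the selected-event counts quoted in the tcl file's comments."""
    total = 0
    for line in tcl_text.splitlines():
        m = COMMENT_RE.match(line.strip())
        if m:
            total += int(m.group(1))
    return total

sample = """\
# 138000/138000 events selected from 69 on-peak runs
lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238
# 48000/48000 events selected from 24 on-peak runs
lappend inputList /store/SP/R18/001237/200309/18.6.0b/SP_001237_013240
"""
print(count_events(sample))  # 186000
```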
Tcl files produced by BbkDatasetTcl can be used directly in your analysis. The Workbook's Tcl files section explains how to do this.
The last thing to look at are the collection names themselves.
Before doing that, you should be aware that a collection in the event store is not really the same thing as a file: any given collection is stored in many files, and each file contains parts of multiple collections. Files also get moved around from time to time to balance loads, and they come and go from tape. So it is not really useful to track down where the files live in the Unix filesystem. Instead, you should use BbkDatasetTcl to generate tcl files, and then have a Framework input module convert those to file locations.
That said, let's look at a collection name. The first collection in SP-1237-Run4.tcl is /store/SP/R18/001237/200309/18.6.0b/SP_001237_013238.
For real data the collection names are a bit different: they live under paths like "/store/PR/R18/AllEvents/0001/81/", with directories named after the run number.
The point of the "/0001/81/" format is to make it easier for users to find the collections they want. For example, someone who wanted all runs beginning with "0001" could just look in the directory "/store/PR/R18/AllEvents/0001/".
For skim collections, BaBar skims many different (PR or SP) collections and then merges the output. Collections from these skims begin with "PRskims" or "SPskims". There are also collections that begin with "SPruns": SP data is initially generated in the SPruns tree and merged into the SP tree, after which the SPruns files are deleted. The collection names for PRskims, SPskims, and SPruns also have their own special conventions. They are not described here, but you can learn about them by following the links below.
For a more detailed explanation of collection names, refer to the Extended Collection Names part of the CM2 introduction, and the link to the RFC provided on that page.
Most users do NOT ever have to make their own collections.
However, for advanced users who do want to make their own collections, walk-through examples are provided in the Workbook section Gen/Sim/Reco.
Recall that in the Quicktour you entered the "cond22boot" command, and the system responded:
Setting OO_FD_BOOT to /afs/slac/g/babar-ro/objy/databases/boot/physics/V9/ana/0202/BaBar.BOOT
"/afs/slac/g/babar-ro/objy/databases/boot/physics/V9/ana/0202/BaBar.BOOT" is a "BOOT file". Whenever a new database is created, a BOOT file is created as well. The BOOT file tells applications like BetaMiniApp how to find the database.
In general, if you are using Release XX software, then the command that you should use is condXXboot. (Note: The release name should begin with a number, like 22. If it begins with a word, like analysis-41, then it is a nickname --- so don't use cond41boot!)
(In pre-Release-12 analyses, the boot commands were a bit different: physboot or data12boot for data, and simuboot or mc12boot for MC. But it is very unlikely that you will need these commands.)
If you are curious, you can check out the contents of the boot file:
> cat /afs/slac/g/babar-ro/objy/databases/boot/physics/V9/ana/0202/BaBar.BOOT
BaBar.BOOT
ooFDNumber=202
ooLFDNumber=65535
ooPageSize=16384
ooLockServerName=objylock05.slac.stanford.edu
ooFDDBHost=objycat03.slac.stanford.edu
ooFDDBFileName=/objy/databases/production/dynamic/physics/V9/0202/BaBar.FDB
ooJNLHost=objyjrnl02.slac.stanford.edu
ooJNLPath=/objy/databases/production/journals/physics/V9/ana/0202/
Basically, the BOOT file sets up the paths to the database. But you do not need to know about these paths. All that you need to do is use the correct condXXboot command before you run your analysis.
To determine the luminosity of your (real) data set, you can use BaBar's bookkeeping tool BbkLumi.
BbkLumi -ds DATASET
BbkLumi --tcl inputfile.tcl
For example, suppose you are using the dataset AllEventsSkim-Run2-OnPeak-R18b. Then the command:
BbkLumi -ds AllEventsSkim-Run2-OnPeak-R18b
prints the luminosity of the full dataset:
Failed on dbname : bbkr14 trying bbkr18
Using aliases: AllEventsSkim-Run2-OnPeak-R18b
Using B Counting release 18 from dataset name AllEventsSkim-Run2-OnPeak-R18b
==============================================
Run by penguin at Sun Apr 22 21:05:18 2007
First run = 18190 : Last Run 29435
== Your Run Selection Summary =============
***** NOTE only runs in B-counting release 18 considered *****
***** Use --OPR or --L3 options to see runs without B-counting *****
Number of Data Runs            5150
Number of Contributing Runs    5150
-------------------------------------------
Y(4s) Resonance            ON        OFF
Number Recorded          5150          0
== Your Luminosity (pb-1) Summary =========
Y(4s) Resonance            ON        OFF
Lumi Processed      61145.302      0.000
== Number of BBBar Events Summary =========
            Number | ERROR | (stat.)   (syst.)   (total)
Total   67472454.3 |         43793.4  742197.0  743487.9
==For On / Off subtraction======
Nmumu(ON)  =  29566811.0 +/-  5437.5 (stat)
Nmumu(OFF) =         0.0 +/-     0.0 (stat)
Nmh(ON)    = 214220114.0 +/- 14636.3 (stat)
Nmh(OFF)   =         0.0 +/-     0.0 (stat)
(Don't worry about the "failed on dbname" message - that's just BbkLumi realizing that it should be using the R18 (bbkr18) database instead of the R14-R16 (bbkr14) database.)
Alternatively, you could use BbkLumi to find the luminosity for the tcl file that you produced with BbkDatasetTcl.
BbkLumi --tcl AllEventsSkim-Run2-OnPeak-R18b.tcl

The output message will be similar to the one above.
The --tcl option can be useful if some of your jobs fail. As long as your tcl files were produced without the --splitruns option, you can use the --tcl option to obtain the luminosity of the successful jobs only (whereas the --ds option always gives the luminosity of the full data set).
For example, imagine that you have divided the AllEventsSkim-Run2-OnPeak-R18b data set among 50 tcl files (without the --splitruns option) and submitted 50 jobs. 40 of them run with no problems, but 10 of them keep failing no matter what you do. So you decide not to use the data from those 10 tcl files.
Now you need to know the luminosity of the 40 tcl files that produced successful jobs, but not the 10 that failed. So you create two directories, "success" and "failed", and move the 40 successful tcl files to "success" and the failed tcl files to "failed." Then you go to "success" and run BbkLumi:
> cd success
success> BbkLumi --tcl *
This will give you the luminosity of the data from the 40 successful jobs only.
As mentioned above, the --tcl option works only on tcl files that were NOT produced with the --splitruns option. When --splitruns is used, BbkLumi cannot tell exactly which runs are in a given tcl file, or whether a whole run or only part of it is included.
On the other hand, --splitruns is useful because without it each tcl file contains a variable number of events, which leads to unpredictable job durations in the batch system. Most people use --splitruns to ensure that all of their jobs use about the same amount of batch time.
BbkLumi can also be used to obtain the luminosity for a particular series or range of runs, for example:
BbkLumi --range 38358-38363    : Gets lumi between runs 38358 and 38363
BbkLumi --run 38358,38363,38451 : Gets lumi for the listed runs
You will probably not need to use these options, but they are there just in case.
At some point in your analysis you will probably need to determine the "luminosity" of your Monte Carlo samples. You would use this information to scale your Monte Carlo set to your data set.
For example, suppose you have a real data set of 100/fb, and a Monte Carlo set of 300/fb. Then you would need to rescale the Monte Carlo set by a factor of 1/3 in order to make direct comparisons with data.
The luminosity that you want is actually the equivalent luminosity: the luminosity that a generic real data sample, filled with all types of decays, would have to have in order to contain the type and number of decays in your Monte Carlo sample. For example, if you have a Monte Carlo sample of 90,000,000 e+e- to tau tau decays, you would want to know what size data sample would contain 90,000,000 e+e- to tau tau decays.
In general, the equivalent luminosity of an MC sample of N events and cross section sigma is:
lumi = N / sigma
For our e+e- tau tau example, the equivalent luminosity is:
lumi = 90,000,000/0.90nb = 100,000,000/nb = 100/fb
(Be careful with your units!)
Note that due to detector acceptance, the effective cross section is sometimes lower than the theoretical production cross section. For example, the theoretical cross section for e+e- to tau+tau- is 0.94 nb, but the effective cross section seen by the detector is 0.90 nb.
If you are using your e+e- to tau tau sample to model the e+e- to tau tau part of a real data sample of 200/fb, then you'd need to rescale your e+e- to tau tau sample by a factor of 2.
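The arithmetic above is worth spelling out once, with units made explicit. This is a minimal sketch using the numbers from the tau-pair example in the text (cross section in nb, luminosity in inverse nb, and 1/fb = 10^6 * 1/nb):

```python
# Equivalent luminosity of an MC sample: lumi = N / sigma.

def equivalent_lumi_fb(n_events, sigma_nb):
    """Equivalent luminosity in /fb for n_events at cross section sigma_nb (in nb)."""
    lumi_per_nb = n_events / sigma_nb   # luminosity in /nb
    return lumi_per_nb / 1e6            # convert /nb to /fb

# 90,000,000 e+e- -> tau tau events at the effective cross section of 0.90 nb:
mc_lumi = equivalent_lumi_fb(90_000_000, 0.90)
print(mc_lumi)   # 100.0  (/fb)

# To model 200/fb of real data, rescale the MC by data_lumi / mc_lumi:
data_lumi = 200.0
print(data_lumi / mc_lumi)   # 2.0
```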
General related documents: