Level 0 Event Data Processing

When invoked for a newly-ingested transfer package, the ProcessSCI.py application performs the following actions:

  • Assigns a "downlink ID" composed of the year and day-of-year of package receipt, and the ordinal "package number" in that day.
  • Creates a downlink working/output directory at a configurable location.
  • Retrieves the records of all complete datagrams resulting from the specified transfer package.
  • For each EPU, assembles a set of sequences of consecutive datagrams. Two datagrams are judged consecutive by comparing the CCSDS sequence counter of the last packet in one datagram with the counter of the first packet of the next, and by comparing the temporal gap between them against a configurable threshold (see the sketch following this list).
  • Creates retrieval-definition (RetDef.xml) files in the output directory, one for each datagram sequence. These are the "chunks" that will be processed in the first stage of the level-0.5 pipeline. Note that a sequence may yield more than one chunk, based on a configurable limit on the number of datagrams per chunk.
  • Calls the pipeline client interface to create a new stream of the level-0.5 pipeline task for the downlink/transfer package.
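
The consecutiveness test at the heart of the grouping step can be sketched as follows. This is a minimal illustration, not the ProcessSCI.py implementation: the Datagram record shape and its field names are hypothetical, and it assumes the standard 14-bit CCSDS packet sequence counter, which wraps at 16384.

    from dataclasses import dataclass

    SEQ_MOD = 16384  # the CCSDS packet sequence counter is 14 bits

    @dataclass
    class Datagram:
        # Hypothetical record shape, for illustration only.
        epu_id: str
        first_seq: int     # sequence counter of the datagram's first packet
        last_seq: int      # sequence counter of the datagram's last packet
        begin_time: float  # receipt time of the first packet, in seconds
        end_time: float    # receipt time of the last packet, in seconds

    def are_consecutive(prev, curr, max_gap):
        """Consecutive means the sequence counter continues across the
        boundary (modulo the 14-bit wrap) and the time gap is acceptable."""
        seq_ok = (prev.last_seq + 1) % SEQ_MOD == curr.first_seq
        gap_ok = (curr.begin_time - prev.end_time) <= max_gap
        return seq_ok and gap_ok

    def group_into_sequences(datagrams, max_gap):
        """Group one EPU's time-ordered datagrams into consecutive runs."""
        sequences = []
        for dgm in datagrams:
            if sequences and are_consecutive(sequences[-1][-1], dgm, max_gap):
                sequences[-1].append(dgm)
            else:
                sequences.append([dgm])
        return sequences

    def chunks(sequence, max_datagrams):
        """Split a sequence into the chunks that become RetDef.xml files."""
        for i in range(0, len(sequence), max_datagrams):
            yield sequence[i:i + max_datagrams]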

Event Extraction and Indexing

The initial scriptlet of the level-0.5 task creates a batch job for each chunk to execute the "getLSEChunk.exe" application from CHS/eventRet. This application writes out a ".evt" file containing the decoded/decompressed context/metaevent/EBF data for each event, and writes to stdout an "index" record for the event that includes portions of the context along with the file offset at which the event was written. Note that since each chunk contains data from only one EPU, these jobs perform no merging.
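
Conceptually, each batch job amounts to an invocation like the sketch below, with the index records on stdout redirected into a per-chunk .idx file. The command-line arguments shown are placeholders; the actual options of getLSEChunk.exe are defined in CHS/eventRet.

    import subprocess
    from pathlib import Path

    def run_extraction(retdef_path: Path, out_dir: Path) -> Path:
        """Run getLSEChunk.exe for one chunk, capturing the index records
        it writes to stdout into a .idx file next to the .evt output."""
        idx_path = out_dir / retdef_path.with_suffix(".idx").name
        with idx_path.open("w") as idx:
            subprocess.run(
                ["getLSEChunk.exe", str(retdef_path), str(out_dir)],
                stdout=idx,   # per-event index records, one per line
                check=True,   # fail the batch job if extraction fails
            )
        return idx_path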

Index Merging

After all extraction subtasks have completed, the output directory contains a set of .evt files with the event data for each chunk and a corresponding set of .idx files. The next pipeline step runs a Python application, "mergeidx.py", from CHS/eventRet to perform the following actions:

  • Read all .idx files from the output directory, and for each record create an in-memory "EvtIdx" object.
  • Segregate these objects into commanded acquisitions by run-start-time.
  • Read in a list of "orphaned" event-index records from a centralized file and adopt any with matching run-start-times.
  • Operate on each acquisition's event list as follows:
    • Sort the list of events by extended sequence counter.
    • Search for a synchronization point between the two EPUs (see the sketch after this list), defined as either:
      1. A sequence of four events that ping-pong between the EPUs without breaks in the datagram sequences, or
      2. A sequence of four events, not all from the same EPU, but all from "start-run" datagrams.
    • Once synchronized, create a subdirectory named for the run-start-time and open an output index file named for the run-start-time and first event-sequence counter. Write the synchronization event index records to this file.
    • Continue iterating through the list of events, writing index records to the merged index file. At each event, check for continued synchronization as follows:
      1. Ensure that this event does not represent a break in the datagram sequence from its EPU.
      2. Look for the next event from the other EPU. If found, ensure that it follows the proper datagram sequence for its EPU. If not found, check that both current datagrams are close-stop datagrams.
    • If either of these conditions is violated, declare loss of sync, close the current merged-index file, and begin searching for sync again.
    • When all events for this acquisition are consumed, append any events left over in the synchronization buffer to the global list of orphan events.
  • After all acquisitions in the downlink are consumed, write out the updated list of orphan events.
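
The synchronization search described above can be sketched as follows. The EvtIdx attributes used here (epu, seq_break, from_start_run) are simplified, hypothetical stand-ins for the real index-record fields; the two rules correspond to the ping-pong and start-run criteria in the list.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EvtIdx:
        # Simplified stand-in for the real EvtIdx object.
        epu: str
        seq_break: bool       # event breaks its EPU's datagram sequence
        from_start_run: bool  # event came from a "start-run" datagram

    def find_sync(events, start) -> Optional[int]:
        """Return the index of the first four-event synchronization window
        at or after 'start', or None if the rest of the list never syncs."""
        for i in range(start, len(events) - 3):
            window = events[i:i + 4]
            epus = [e.epu for e in window]
            # Rule 1: strict ping-pong between the two EPUs, no breaks.
            ping_pong = (all(a != b for a, b in zip(epus, epus[1:]))
                         and not any(e.seq_break for e in window))
            # Rule 2: not all from one EPU, all from start-run datagrams.
            start_run = (len(set(epus)) > 1
                         and all(e.from_start_run for e in window))
            if ping_pong or start_run:
                return i
        return None

Once find_sync returns an index, the merging loop writes those four index records to a fresh merged-index file and then advances event by event, applying the continuation checks until a break forces a new search.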

After the mergeidx.py application completes, this step harvests the list of acquisitions produced by this downlink and communicates it back to the pipeline.

Merged Event File Generation

A pipeline scriptlet uses the harvested list of acquisitions to launch batch tasks to write out merged .evt files based on the merged .idx files. These tasks execute the "writeMerge.exe" application from CHS/eventFile to read the necessary EPU-chunk .evt files and write out the results.
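
The core of the merge can be sketched as below, assuming, hypothetically, that each merged-index record carries the source chunk .evt path, the byte offset of the event, and its length; the real logic is implemented by writeMerge.exe in CHS/eventFile.

    def write_merged_evt(index_records, merged_path):
        """Copy each event's bytes from its source chunk .evt file into
        the merged .evt file, in merged-index order."""
        with open(merged_path, "wb") as out:
            for rec in index_records:
                # rec.evt_path, rec.offset, and rec.length are assumed
                # fields; a production version would cache open handles.
                with open(rec.evt_path, "rb") as src:
                    src.seek(rec.offset)
                    out.write(src.read(rec.length))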

Pipeline II Injection

After all the merge-file-writing subtasks succeed, the final "Half-Pipe" step creates a new stream of the L1 task for the downlink, supplying as variables the fully-qualified path to the output directory and the downlink ID.
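
The hand-off might look like the following sketch. The pipeline client object, its createStream method, and the variable names are all hypothetical illustrations of the call described above, not the real pipeline client API.

    from pathlib import Path

    def inject_l1(pipeline, downlink_id: str, output_dir: Path) -> None:
        """Create a new stream of the L1 task for this downlink."""
        pipeline.createStream(          # hypothetical client call
            task="L1",
            stream=downlink_id,
            variables={
                "outputDir": str(output_dir.resolve()),  # fully-qualified
                "downlinkId": downlink_id,
            },
        )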


Owned by: Bryson Lee
Last updated by: Chuck Patterson 03/12/2009