INDEX
*  SPIRES Technical Notes
+1  A Brief Explanation About this Document
1  Sorting Records in SPIRES; SPISORT
1.1  An Overview of Record-Sorting Methods
1.2  Sorting Records with SPISORT
1.3  Creating the SPISORT Input File
1.3.1  The DEFINE SET Command
1.3.2  The GENERATE SET Command
1.4  Sorting the Input File: the SPISORT Command
1.4.1  SPISORT Error Codes
1.4.2  (*) Running a Batch SPISORT Job
1.5  Processing the SPISORT Output File: the FOR SET Command
1.6  The SHOW SET INFORMATION (SHO SET INF) Command
1.7  Filters and SPISORT
1.8  Direct Sets
1.8.1  Direct Sets and the DEFINE SET Command
1.8.2  Direct Sets and the FOR SET Command
1.8.3  Direct Sets and Filters
1.9  Display Sets
2  An Alternate Form of the Subfile Name
3  Sharing Records in SPIRES System Subfiles with Other Users
3.1  The METACCT Subfile
3.2  Using METACCT Privileges: The SET METACCT Command
3.3  Using VERSION-STR to Track Copies of System File Records
4  Object Code Management: The ZAP Subfile-name Commands
4.1  The ZAP FORMAT Command
4.2  The ZAP RECDEF Command
4.3  The ZAP SYS PROTO Command
4.4  The ZAP VGROUP Command
4.5  The ZAP STATIC Command
5  Multiply Defined Elements
5.1  The Throw-away Element: The "-" Element in SPIRES Subfiles
6  Storing DECLARE Data for Multipurpose Use
6.1  Declare Data Subfiles
6.2  Using Your Stored Declared Data
7  Output Control
7.1  The Output Control Declaration
7.2  Using Output Control
7.3  DATA MOVE Processing
7.3.1  DATA MOVE for Subfile Output
7.3.2  DATA MOVE for Table output
7.4  DECLARE ELEMENT SUBFILE Description
7.5  DATA MOVE using PERFORM TABLE CREATE processing
8  External Files
8.1  External File Record Processing
8.2  External File Data Declaration -- the EXTERNAL subfile
8.3  External File Control Data
8.4  External File Data Declaration -- Examples
9  Change Generation
9.1  The Change Generation Procedure
9.2  The Changes Subfile
10  XEQ DATA Processing
10.1  The Meta-Data record structure and XSEMPROC Actions
10.2  The SET XEQDATA Command
10.3  The XEQ DATA Command
10.4  XsemProc Action Descriptions
10.5  Xeq Data Sample Meta-Data Subfile Definition
10.6  Xeq Data Sample Protocol
11  Input Control
11.1  The Input Control Declaration
11.2  Using Input Control
11.2.1  Input Control commands -- Sample 1: Declare Tables
11.2.2  Input Control commands -- Sample 2: Declare Input Tables
11.2.3  Input Control commands -- Sample 3: Input RECDEF definitions
11.2.4  Input Control commands -- Sample 4: Input Control protocol
11.2.5  Input Control commands -- Sample 5: Processing
12  The REFERENCE Command, Partial FOR
12.1  Introduction; the REFERENCE Command
12.2  Record Navigation
12.3  Partial Processing Commands
12.4  Partial Processing UPDATE and MERGE Capabilities
12.5  The SHOW LEVELS Command
12.6  The FOR * command
12.7  Partial Processing to the Rescue
12.8  Using the INCLOSE Command to Close-Out a Referenced Record
13  I/O Monitoring Commands In SPIRES
13.1  The SET SINFO (SET SIN) Command and the "S" Parameter
13.2  The SET NOSINFO (SET NOSIN) Command
13.3  The SHOW SINFO (SHO SIN) Command
13.4  The CLEAR SINFO (CLE SIN) Command
13.5  The SHOW FILE COUNTS and SET FILE COUNTS Command
13.6  The SHOW SUBFILE INFORMATION Command
14  Path Processing
14.1  The Primary Path
14.2  Establishing Alternate Paths
14.3  Alternate Subfile Paths
14.4  Alternate Format Paths
14.5  Alternate VGROUP Path
14.6  Establishing a Subfile Path as the Default Subfile
14.7  Clearing Paths
14.8  Obtaining Information on Paths
14.9  Examples of Path Processing
14.10  Simultaneous Transfers and References in Paths
14.11  The CLEAR SUBGOALS (CLR SUBG) Command
15  Maintenance and Debugging Commands
15.1  DUMP BLOCK Command
15.2  FIX BLOCK Command
15.3  DEBUGGING COMMANDS, EMULATOR
15.4  DUMP RECORD Command
15.5  Object Deck Maintenance
16  Setting Locks in SPIRES
16.1  Setting Attach Locks in SPIRES
17  Subfile Tables
17.1  Establishing a Subfile Table: The DECLARE TABLE Command
17.2  Using a Subfile Table
17.3  Establishing a Subfile Input Table: The DECLARE INPUT TABLE Command
18  Packed Decimals in SPIRES
18.1  General Information on Packed Decimals
18.1.1  The Components and Terminology of Packed Decimals
18.1.2  The Input and Output Forms of Packed Decimal Values
18.1.3  Using Packed Decimals
18.2  Packed Decimals as Data Elements
18.3  Packed Decimal Variables and Arithmetic
18.4  Arithmetic with Packed Decimal Values
18.5  Handling Other Types of Numeric Data
19  Edit Masks
19.1  Numeral Representation: "9", "Z", and "*"
19.2  Decimal Point Indication: "."
19.3  Sign and Currency Symbols: "+", "-", "CR", "DB" and "$"
19.4  Floating Characters: "+", "-" and "$"
19.5  Insertion Characters: " ", "B", "0" and ","
19.6  Formal Edit Mask Syntax
20  Dynamic Elements
20.1  The DEFINE ELEMENT (DEF ELE) Command
20.2  Secondary Elements as Multiple Occurrences of a Dynamic Element
20.3  Declared Elements: Dynamic Elements With Element Definitions
20.4  Using Dynamic Elements
20.5  Dynamic Elements with Formats
20.6  Dynamic Elements and WHERE Clauses
20.7  Errors in Using Dynamic Elements
21  Element Filters
21.1  The SET FILTER Command
21.1.1  Setting Additional Filters: The SET FILTER OVERLAY command
21.2  Filter Capabilities and Limits
21.3  Showing and Clearing Filters
21.4  Filtering Elements by Occurrence Number
21.5  Filters and Dynamic Elements
22  IF-Testing in Record Input
23  Phantom Structures
23.1  Coding Phantom Structures
23.2  Capabilities and Restrictions of Phantom Structures
23.3  Defining Phantom Structures Dynamically with DEFINE ELEMENT
24  Temporary SPIRES Files
24.1  DECLARE FILE
24.2  DECLARE SUBFILE subname
25  The WITH DATA Option: Input Data as Part of the Command
26  DECLARE EXTERNAL DATA
27  Hierarchical Records in Multiple Table Output: DEFINE TABLE
27.1  The DEFINE TABLE Command
27.2  The GENERATE TABLES Command
27.3  The CLEAR TABLES Command
27.4  The SHOW TABLES Command
28  The PERFORM Commands
28.1  The PERFORM PRINT Command
28.1.2  Error messages
28.1.3  Printing from Protocols Files
28.2  The PERFORM PUBLISH Command
28.3  The PERFORM BUILD Command
28.3.1  The PERFORM BUILD PROTOCOLS Command
28.3.2  PROTOCOLS File Definition
28.3.3  $PROTOCOLS format
28.4  The PERFORM FILEDEF SUMMARY Command
28.5  The PERFORM FORMAT LIST Command
28.6  PERFORM SYSTEM PRINT Command
28.6.1  FONT TABLE
28.6.2  FORMATTING-CHARACTER TABLE
28.6.3  DESTINATION ATTACHED and POSTSCRIPT
28.7  PERFORM SYSTEM SEND Command
28.8  PERFORM SYSTEM MAIL Command
28.9  PERFORM CHNGREF
28.10  The PERFORM SYSDUMP Command
29  Exporting Data from SPIRES to Other Programs: The Exporter
29.1  The Exporter Input Screens
29.1.1  The Target System Selection Screen
29.1.2  The Data Structure Selection Screen
29.1.3  The Element Specification Screen
29.1.4  The Field Layout Screen
29.2  The Exporter Command Environment
29.3  Importing Your Data
29.4  Sample Data
29.5  EMS
29.6  Full-Screen Key-Sequences
29.7  The SPIMSG Command
29.8  Command Retry
:  Appendices
:29  SPIRES Documentation

*  SPIRES Technical Notes

******************************************************************
*                                                                *
*                     Stanford Data Center                       *
*                     Stanford University                        *
*                     Stanford, Ca.   94305                      *
*                                                                *
*       (c)Copyright 1994 by the Board of Trustees of the        *
*               Leland Stanford Junior University                *
*                      All rights reserved                       *
*            Printed in the United States of America             *
*                                                                *
******************************************************************

        SPIRES (TM) is a trademark of Stanford University.

+1  A Brief Explanation About this Document

This manual is in the process of being written. There are many subjects that fit into the category of data base management but do not, and should not, belong to other SPIRES manuals. In fact, this manual was created when we realized that we did not know which currently existing manual should contain certain new material developed and documented in SPIRES.

This manual currently is a catch-all, a place for homeless documentation of SPIRES components. When time becomes available, it will become a structured document, organized like any other SPIRES manual, as a logically constructed reference manual. For the time being, however, it will remain as a "draft" document, with new sections added as needed in order to document the growing number of SPIRES capabilities that cross the boundaries between File Definition, Formats, Protocols and general file management.

1  Sorting Records in SPIRES; SPISORT

SPIRES has several different capabilities for sorting goal records, that is, placing records in order based on the values of one or more goal record elements. Record sorting is often desirable for reports, where you want all the records having a particular value for a given element to be displayed together, all the records having a second value to be together, and so forth.

The four methods of arranging goal records are discussed and compared in the next section. [See 1.1.] The remaining sections of this chapter will cover the most versatile of the methods, a procedure called SPISORT. [See 1.2.]

1.1  An Overview of Record-Sorting Methods

This section will discuss and compare four different ways to sort goal records. The remainder of this chapter will be about one of them, usually called SPISORT, although the actual SPISORT program is just one of its steps. [See 1.2.] The others are discussed in other SPIRES manuals; appropriate references are included here.

The four methods are:

   1.  The SEQUENCE command
   2.  The SPISORT procedure
   3.  The FOR INDEX command
   4.  The FOR SUBFILE command

Below, each of these techniques is briefly discussed:

The SEQUENCE command

The simplest of the four techniques to use, the SEQUENCE command sorts the records in the current stack or search result. The resulting stack can then be displayed with the TYPE command. It is commonly used in a situation like this:
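For instance, a session along these lines (the search, the element name, and the exact SEQUENCE syntax here are illustrative, not taken from this manual):

 FIND CITY PALO ALTO
 SEQUENCE BY NAME
 TYPE ALL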

The SEQUENCE command:

The SPISORT Procedure

Several steps are involved in using SPISORT. First you create a "set" of records, using the Global FOR commands DEFINE SET and GENERATE SET. Next you issue a SPISORT command to sort the records in the set. Then you may process the sorted records under the Global FOR command FOR SET.

The SPISORT procedure:

The FOR INDEX Command

With this technique and FOR SUBFILE below, you are taking advantage of "automatic sorting" done by SPIRES in maintaining the data base. The indexes of the subfile in effect sort goal records by element value. Using the FOR INDEX command with Global FOR processing commands, you can display records in order without having to sort them specially. In addition, using the SEQUENCE option on the FOR INDEX command, you can sort the records retrieved at a given index node (i.e., having the same value for the indexed element) by secondary elements, in a manner similar to the SEQUENCE command. Here is a sample session using the FOR INDEX command:
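A session along the following lines, with the report routed to the active file (the subfile and index names are illustrative, and the display commands may differ in detail):

 SELECT PEOPLE
 FOR INDEX NAME
 DISPLAY ALL
 ENDFOR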

The report in the active file would display records in alphabetic order by values passed from goal records to the NAME index, which presumably contains the values from the NAME element.

The FOR INDEX command:

The FOR SUBFILE Command

If the goal records are to be sorted by the key element of the record, then no special sorting is required at all -- the records are maintained in key order in the tree. Either FOR TREE or FOR SUBFILE (to pick up deferred queue records) processing can be used to retrieve the records in that order. Since the key is unique for each record, there are no secondary elements for sorting. Of the four techniques discussed, this method is the cheapest, since it requires no special sorting.

FOR SUBFILE processing is also discussed in the Global FOR manual mentioned above.

1.2  Sorting Records with SPISORT

SPISORT is a name used to describe a three-step procedure for sorting goal records, though specifically its name refers only to the batch program used in the second step. The three steps you follow are:

   1.  Create a set of unsorted records, using the DEFINE SET and GENERATE SET commands.
   2.  Sort the set, using the SPISORT command.
   3.  Process the sorted records, using the FOR SET command.

Each of these steps will be described in turn. The next section will discuss the first step, in which a set for sorting is created. [See 1.3.] Following that will be a discussion of the SPISORT command [See 1.4.] followed by more information on the FOR SET command, used to process sets. [See 1.5.]

Discarding the set could be considered a significant fourth step to the procedure. Since the set is stored as an ORVYL data set on your account, it will accrue storage charges, so you should get rid of it when you no longer need it. To discard a set, issue the ORVYL command ERASE:
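For example, to discard a set named MYSET (an illustrative name):

 ERASE MYSET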

Multiple occurrences of an element used as a sort field cause multiple "sort records" to be created in the set. Thus, after the set is sorted, FOR SET processing may cause the same goal record to appear several times, once for each occurrence of the sort element. By default, in fact, when the goal record is displayed under FOR SET, only the occurrence of the sort element that caused the record to appear at that point in the set is displayed; the others are "filtered out" automatically. The filters can be removed, if desired. Filters add a great deal of power and sophistication to SPISORT work, and they are discussed separately later in this chapter. [See 1.7.]

By default, during FOR SET processing, SPIRES retrieves a pointer from the set for each goal record, and then fetches the goal record from the tree (or deferred queue) using that pointer. Thus, each record is, in effect, retrieved twice: once to fetch the sort data, and once to fetch the data for FOR SET processing. In some cases, you can save substantially by creating a "direct" set, in which the set itself contains all the data used for sorting and used for displaying under FOR SET processing. Direct sets and their use are discussed in a later section. [See 1.8.]

1.3  Creating the SPISORT Input File

A set is created with a combination of Global FOR commands: DEFINE SET, GENERATE SET and ENDFOR. The DEFINE SET command establishes the set and tells SPIRES which elements to put into it. [See 1.3.1.] The GENERATE SET command places the unsorted records into the set. [See 1.3.2.]

You must be in Global FOR mode to issue this series of commands. The Global FOR command "FOR class" initiates Global FOR, telling SPIRES which records to process. WHERE clauses and SET SCAN commands may also be incorporated into the procedure as desired. For more information about Global FOR in general (and about the "FOR class" command in particular), see the reference manual "Sequential Record Processing in SPIRES: Global FOR".

The next sections will discuss the DEFINE SET and GENERATE SET commands in detail.

1.3.1  The DEFINE SET Command

A SPISORT input file is created by the DEFINE SET command. Unsorted records are placed in it by a GENERATE SET command. Both of these commands can be issued only when Global FOR is in effect. Any global FOR class, WHERE clause, SET SCAN commands, etc., can be used to specify which records will be placed in the input file. The input file may be created only on your own account.

The form of the DEFINE SET command is

 DEFINE SET setname [REPLACE] [DIRECT [EXTERNAL] [SCAN] [ALL]] [TV=ALL]...
 ... ELEMENTS [=] element-list [+ direct-list] [- direct-list]

or
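A second form, used to add entries to an existing set, replaces the options above with the CONTINUE (or APPEND) option; the exact layout here is an approximation based on the discussion below:

 DEFINE SET setname CONTINUE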

"Setname" is the name of an ORVYL file that will hold the sort input data. If this is the name of an existing file, either the CONTINUE, APPEND (the same as CONTINUE) or REPLACE options should be used. (If an existing file is named, and REPLACE is not specified, then the system will ask permission to replace the file. See the examples below for more information on the CONTINUE option.) You may use a fully qualified ORVYL data set name ("ORV.gg.uuu.name" where "gg.uuu" must be your account number) or type only the "name" portion; the "name" portion may not exceed 33 characters in length.

The DIRECT option and its accompanying "direct-list" (a list of elements) are discussed in the section on direct sets. [See 1.8.]

The TV=ALL option specifies that all occurrences of the elements named in the element-list are to be processed, unless the element includes a TV=n option to override the global TV=ALL option. Use the TV=ALL option once before the ELEMENTS keyword as an alternative to including TV=ALL as an option for each element in the element-list.

The ELEMENTS keyword signals that a list of the elements to be used for sorting follows. Here is the syntax of the "element-list" portion of DEFINE SET:

For example, to define a set where entries are sorted first by CITY and then by ADD.DATE, use a command such as:
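One such command (the set name CITYSET is illustrative):

 DEFINE SET CITYSET ELEMENTS = CITY, ADD.DATE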

Up to sixteen elements may be specified, separated by commas or blanks. The "sort portion" of a sort record, i.e., the elements in the ELEMENTS list, can be about 1000 bytes long per record. (The total length of the sort record, including direct set information, can be 5500 bytes.) When creating sort records under GENERATE SET, SPIRES will start with the value of the first element and continue till the end of the list of elements or the 1000-byte limit is reached, whichever comes first.

The elements may include elements from phantom structures, as well as dynamic elements. With dynamic elements, however, be aware that SPIRES will create the dynamic element's value for the sort file, but will otherwise know nothing about the dynamic element. If you want to use the dynamic element when you process the set later, you will need to define it again; and since SPIRES will compute its value afresh when you display it, it may differ from the value it had when the set was created. (This is not a problem with direct sets.) [See 1.8.1.]

Use parentheses around groups of elements to specify that several different elements should be used for sorting at a particular level. For example, to specify the primary sorting element as AUTHOR or EDITOR (whichever occurs) and the secondary sorting element as TITLE, use a command such as:
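For instance (the set name is illustrative):

 DEFINE SET BOOKSET ELEMENTS = (AUTHOR EDITOR), TITLE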

Options for Element-names

The element list is made up of a series of element-name/option pairs. There are several possible options, and all options following an individual element mnemonic must be enclosed in a single pair of parentheses. The possible options are any combination of the following:

Note that lowercase sorts before uppercase if values are not forced to uppercase. (The default is to force character string values to uppercase for sorting.)

The following additional options (TV, SV, TS and SS) all have the form

 option[=]n

where "n" is an integer. The "=" is optional, and may be replaced by a blank. The TV option is the one most commonly used; the others are used very infrequently, since their effects can be duplicated with various combinations of the SET FILTER command. [See 1.3.2.] If you are using SPISORT because you are trying to sort more records than the SEQUENCE command can handle, you need not use any of these options.

Examples of DEFINE SET Commands

Below are some sample DEFINE SET commands:
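Some plausible samples (the set and element names are illustrative; the (D) option requests descending order):

 DEFINE SET SET1 ELEMENTS = NAME
 DEFINE SET SET2 REPLACE ELEMENTS = CITY (D), NAME
 DEFINE SET SET3 TV=ALL ELEMENTS = AUTHOR, TITLE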

In the following examples, the element ITEM.DATE is a multiply occurring element within the multiply occurring structure ITEM:

In effect, the last three examples show that TV and TS, when used with an element in a structure, ignore structural boundaries, counting absolute occurrences of the element. Using SV and SS can re-establish those boundaries to some degree.

After defining the set, the next step is to put records into it, using the GENERATE SET command, discussed in the next section.

If you want to see the DEFINE SET command issued to create a set, issue the SHOW SET INFORMATION command. [See 1.6.]

1.3.2  The GENERATE SET Command

After the DEFINE SET command has been issued, you must tell SPIRES to place unsorted goal records into the set, which is done with the GENERATE SET command. Remember that the DEFINE SET and GENERATE SET commands can be issued only when Global FOR is in effect; GENERATE SET can be issued only after a DEFINE SET command has been issued.

The GENERATE SET command works in Global FOR like many other commands such as DISPLAY, REMOVE, etc. Its syntax is:
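Its likely shape, judging from the description below and the GENERATE SET 5 example in the Technical Note later in this section, is:

 GENERATE SET [ALL | n]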

The first options indicate which records in the Global FOR class are to be processed by the command. If no option is specified here, ALL is assumed; note that the default, ALL, processes all of the records into the input file.

A typical sequence of commands to create a sort input file might be:
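One possibility (the subfile, set and element names, and the wording of the ENDFOR response, are illustrative):

 SELECT BOOKS
 FOR SUBFILE
 DEFINE SET MYSET ELEMENTS = AUTHOR, TITLE
 GENERATE SET ALL
 ENDFOR
 Set MYSET closed; 250 entries generated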

An ENDFOR command (or a command that causes an END OF GLOBAL FOR condition, such as CLEAR SELECT) must occur before the SET can be sorted. Note that the response shown above to the ENDFOR command gives the name of the unsorted file (the set), as well as a count of the number of "entries" in it. The number of entries may or may not be equal to the number of records processed; if you are sorting on multiple occurrences of an element, then a single record may create multiple entries in the set (see Technical Note below).

The GENERATE SET command can be issued more than once while Global FOR is in effect. When Global FOR ends, or when Global FOR begins again, the set named in the DEFINE SET command is closed and the message giving the number of entries in it is output. If more sort input data is to be added to the same set, the DEFINE SET command will have to be issued with the CONTINUE option before any GENERATE SET commands are valid. Note that you cannot specify an element list if the CONTINUE option is used; the element list of the original DEFINE SET command that created the set is used. [See 1.3.1.]

You may set element filters (using the SET FILTERS command) to limit or control the occurrences of values used to create sets. The SET FILTER command(s) you need must be issued before the GENERATE SET command. For example, if you want to sort on the last occurrence of the MOD.DATE element in each record, you could issue these commands:

Using filters as in the example, you can filter out occurrences of an element that you do not want to use for sorting.

The next step is to issue the SPISORT command, the subject of the next section. [See 1.4.]

Note that you can process the generated set under the FOR SET command even before it has been sorted, if desired. [See 1.5.]

Technical Information on the Creation of Sort Entries

How does SPIRES determine the number of "sort entries" to make for a given record? Do sort entries get made for a record having no occurrences of the sort elements?

The answer to the first question depends on several factors. The most important is whether the TV option is specified on any of the elements in the DEFINE SET command. If not, there will be one sort entry for each goal record processed by the Global FOR command GENERATE SET. (For example, the command GENERATE SET 5 would presumably process five goal records and create five sort entries.)

If any of the sort elements do not have a value in a record, SPIRES essentially assigns a null value to the element for sorting purposes. Even if none of the sort elements has a value, SPIRES still creates a sort entry for that record. It is important to realize that the DEFINE SET command does not determine what records deserve sort entries -- each goal record processed by the GENERATE SET command (as determined by the "FOR class" command, the WHERE clause, SET SCAN commands, and the GENERATE SET command itself) generates at least one sort entry. The DEFINE SET and SET FILTER commands may determine how many sort entries are created for each record, but at least one will be created for each goal record processed by GENERATE SET.

Returning abruptly to the question of how many sort entries are created for a given goal record, consider the effect of the TV option for an element on the DEFINE SET command. For example,
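Suppose the command included the TV=ALL option for a multiply occurring element (the set and element names are illustrative):

 DEFINE SET MYSET ELEMENTS = PSEUDONYMS (TV=ALL)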

If a goal record processed by the GENERATE SET command has two occurrences of the PSEUDONYMS element, then two sort entries will be created -- one for each value. But consider this example:
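Here a TV limit caps the number of occurrences used (again, the set name is illustrative):

 DEFINE SET MYSET ELEMENTS = PSEUDONYMS (TV=4)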

From one to four sort entries will be created for each record, depending on the number of occurrences of PSEUDONYMS. (Remember, if there are no occurrences of PSEUDONYMS in a record, one sort entry will be created.)

If filters are in effect, they are applied first, before the TV, SV, TS and SS options are applied.

The number of sort entries grows substantially when two or more sort elements are multiply occurring. You can use a simple formula to determine that number for any given goal record:
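The number of sort entries for a record is the product, over the sort elements, of the number of occurrences actually used for each element: at least one (even if the element has no occurrences), and no more than any TV limit in effect. Restated as a sketch in modern notation (a hypothetical Python helper, not part of SPIRES):

```python
from math import prod

def sort_entries(occurrences, tv_limits=None):
    """Number of sort entries GENERATE SET creates for one goal record.

    occurrences: occurrence count of each sort element in the record.
    tv_limits:   optional per-element TV=n caps (None means no cap).
    """
    tv_limits = tv_limits or [None] * len(occurrences)
    return prod(
        # an element with no occurrences still contributes one entry;
        # a TV=n option caps the occurrences used at n
        max(1, occ if tv is None else min(occ, tv))
        for occ, tv in zip(occurrences, tv_limits)
    )

# Three sort elements contributing 10, 3 and 2 occurrences: 60 entries.
print(sort_entries([10, 3, 2]))
```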

Below is a simple example. Suppose that you want to know how many sort entries will be created for record ABC that has 10 each of the three sort elements named in the DEFINE SET command:
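A command along these lines, with TV options limiting elements Y and Z, would produce the counts used below (the names and limits are illustrative):

 DEFINE SET MYSET ELEMENTS = X (TV=ALL), Y (TV=3), Z (TV=2)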

If element X were the only sort element, 10 entries would be created; if element Y, 3; and if element Z, 2. Multiplying these together gives the product 60.

1.4  Sorting the Input File: the SPISORT Command

To sort the input set and create a sorted output file, issue a SPISORT command. The syntax is:
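A form consistent with the parameters described below (the bracketing and option order are a reconstruction):

 SPISORT infile [TO outfile [REPLACE]] [TIME=n] [ORDER [=] (order-list)]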

The SPISORT command verb may not be abbreviated.

"Infile" (which is required) is the name of the input file created by your DEFINE SET command.

"Outfile" is the name of the output file to contain the sorted data. If you omit the "TO outfile" parameter, then your input file will be overwritten by the sorted output file.

If you do name an output file, the REPLACE parameter may be used to specify that an existing file of that name may be replaced with the new data. (If you omit the REPLACE parameter and the file already exists, you will be asked whether it's ok to replace the file.)

With the TIME option, you may increase the time allotted for the sort job. The default is 1 minute. (Note that the sort job may not run in CLASS=F with a high time limit. This could cause the sort job to queue, especially if the SPISORT command is being issued from a job, e.g. Batwyl.)

The ORDER option allows you to reorder the sort elements specified in your DEFINE SET command. Only elements named in the ELEMENT list (not the "direct" list; see 1.8) may be named here. The parentheses around the order list are optional, but if they are omitted, the ORDER parameter must be last in your SPISORT command.

Each element may be followed by "(A)" or "(D)" to indicate ascending or descending order; whichever was specified for the element in the DEFINE SET command is the default.

The elements may be specified by name (the name given in the DEFINE SET command, though any structural path information should be omitted) or number, where the number is the element's position in the ELEMENT list in the DEFINE SET command (see example below).

By the way, if the element names in the DEFINE SET command are themselves numbers, then SPISORT will assume the numbers in the ORDER parameter are element names, not position numbers. Naturally, however, the possibility of confusion is much greater if you use numbers as element names in the DEFINE SET command, particularly if you might need to use the ORDER parameter. You can see what DEFINE SET was issued to create a set by issuing the command SHOW SET INFORMATION. [See 1.6.]

Examples of SPISORT commands

Here are a few examples of SPISORT commands:
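Some plausible examples (the file and element names are illustrative):

 SPISORT MYSET
 SPISORT MYSET TO SORTED REPLACE
 SPISORT MYSET TO SORTED TIME=3 ORDER (WEIGHT (A), COLOR)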

The SPISORT Command Runs a Batch Job

The SPISORT command actually constructs JCL and runs a batch job on your behalf, to accomplish the sorting of your input file. For this reason, your "current job" number will be affected when you issue a SPISORT command. In addition, SPISORT makes use of the $ASK, $WDST, and $WDSR variables.

1.4.1  SPISORT Error Codes

If a SPISORT command fails, an error code will be supplied. Listed below are the possible codes for SPISORT failures. You may get online explanations, too, with the EXPLAIN command. For example:

The error code is also recorded in the $SORTCODE variable, and $NO is set to true when the SPISORT command fails.

SPISORT Error Codes and Explanations

0001  Attempt to define the same parameter twice,
      or a single parameter value exceeds 22 characters.
0002  Invalid parameter option.
0003  Either IN or OUT (or both) have not been specified.
0004  Main block of IN not correct.  Did you specify the
      DEFINE SET file name as the IN=file ?
0005  IN file does not define any data to sort.  Did you
      GENERATE SET after doing the DEFINE SET?
0006  Data blocks of IN not correct.  Did you PUT APPEND
      something to the IN=file ?
0007  OUT=file exists and REP option not specified.
0008  SORT did not output the required number of records.
0009  IN=file did not contain all the records to sort that
      it was supposed to contain.
0010  ORD field in the parm list is invalid.  Are the element
      names or numbers valid?  Did you specify a sort option
      other than "(A)" or "(D)"?
0011  IN=file did not have any sort fields.
0090  ORVYL file system unavailable.
0094  IN=file name illegal, probably invalid characters.
0096  Permanent I/O error on IN=file.
0097  Permanent I/O error on OUT=file.
0099  IN=file not available.  Did you ENDFOR before running?
0105  OUT=file not available.  Did you have it attached?
0107  IN=file access not permitted.  Is the file yours?
0108  IN=file read access prohibited.
0109  OUT=file write access prohibited.
0112  IN=file missing a required block.
0117  OUT=file storage limit exceeded.  Get more ORVYL blocks.
0119  IN=file does not exist.
0121  ORVYL out of block space.  Tell systems.
0194  OUT=file name illegal, probably invalid characters.
0198  OUT=file not available.  Do you have it attached?
0206  OUT=file access not permitted.  Is it yours?
0210  OUT=file storage limit exceeded.  Get more ORVYL blocks.
0214  OUT=file name overflows dictionary.  Tell systems.
0226  OUT=file overflows system tables.  Tell systems.
0230  ORVYL out of block space.  Tell systems.
1000  The SPISORT command failed to parse correctly.  You
      may have misspelled parameters.
1001  SPISORT could not sort your set.

1.4.2  (*) Running a Batch SPISORT Job

The interactive SPISORT command was implemented in March 1989. Prior to that time, the sorting step of the SPISORT procedure had to be accomplished by running a batch job to execute the SPISORT program. (In fact, the SPISORT command runs a batch job, constructing the JCL and submitting the job on your behalf.)

It is still possible, of course, to run the batch job yourself. The paragraphs below describe the JCL to run a batch SPISORT job.

To sort the input set and create a sorted output file, a batch job must be run using the following JCL to invoke the SPISORT program:
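JCL along these general lines (the procedure name and PARM layout are assumptions; the IN, OUT, REP and ORD keywords appear in the error code listing earlier):

 //jobname JOB account
 //        EXEC  SPISORT,PARM='IN=infile,OUT=outfile,REP'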

The parameters specify the name(s) of the input and output files, and whether an existing file is to be replaced by a new output file. The forms of the parameters are:
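Based on the keywords shown in the error code listing [see 1.4.1], the parameters are presumably:

 IN=infile         the unsorted set created by DEFINE SET (required)
 OUT=outfile       the file to receive the sorted output
 REP               replace an existing OUT file
 ORD=(order-list)  reorder the sort elements (see the example below)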

Here is an example using the ORD parameter. Suppose your DEFINE SET command was:
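For instance (the set and element names are illustrative):

 DEFINE SET MYSET ELEMENTS = SIZE, COLOR, WEIGHT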

You could change the sort order later on the EXEC card:
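An EXEC card along these lines (the PARM layout is an assumption) would request the reordering described below:

 //  EXEC  SPISORT,PARM='IN=MYSET,ORD=(WEIGHT (A),COLOR,SIZE)'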

SPISORT will sort the elements in the order WEIGHT (in ascending order), COLOR and SIZE.

By the way, if the element names in the DEFINE SET command are themselves numbers, then SPISORT will assume the numbers in the ORD parameter are element names, not position numbers. Naturally, however, the possibility of confusion is much greater if you use numbers as element names in the DEFINE SET command, particularly if you might need to use the ORD parameter. You can see what DEFINE SET was issued to create a set by issuing the command SHOW SET INFORMATION. [See 1.6.]

Issue the WYLBUR command RUN to submit the SPISORT job for execution.

If your batch job fails, a SPISORT RETURN code will be supplied. The codes are the same as those listed earlier for the SPISORT command. [See 1.4.1.] The code is prefixed with a "U" in the HASP job log messages.

1.5  Processing the SPISORT Output File: the FOR SET Command

Sets of records created by the DEFINE SET and GENERATE SET commands are processed under the Global FOR command FOR SET. That command has the following form:

 FOR SET setname [UNFILTERED] [DIRECT]

or

 FOR source VIA SET setname [UNFILTERED] [DIRECT]

where "setname" is the name of the ORVYL data set created by the DEFINE SET and GENERATE SET commands or is a stored result or stack. If the set is stored as an ORVYL data set under some account other than your own, then use the form "FOR SET ORV.gg.uuu.setname" where "gg.uuu" is the account number under which the set is stored. For more information about using a stored result or stack with FOR SET, see the "Global FOR" manual; online, EXPLAIN FOR SET COMMAND, IN GLOBAL FOR.

The "source" may be any of the usual Global FOR access classes, e.g. SUBFILE, TREE, UPDATES, etc. In the first form, SUBFILE is assumed to be the source.

The UNFILTERED option tells SPIRES to process the entire goal record, rather than the version whose sort elements are filtered by path occurrence information gathered when the SPISORT sort entries were created. This option is explained in detail in the next section. [See 1.7.]

The DIRECT option is used when you created a direct set (by using the DIRECT option on the DEFINE SET command). The DIRECT option tells SPIRES not to retrieve the goal record that a sort entry in the set represents, but instead to use the data actually in the sort entry. Hence, "FOR source VIA SET setname DIRECT" is exactly the same as "FOR SET setname DIRECT" -- the "source" is always the direct set when DIRECT is specified. Direct sets are discussed in a later section. [See 1.8.]

1.6  The SHOW SET INFORMATION (SHO SET INF) Command

You can get information about generated sets stored on your account by issuing the SHOW SET INFORMATION command:

You can issue this command without the "setname" option if you are processing records under a FOR SET command; if not, you must name the set you want information about.

The information displayed includes the complete DEFINE SET command you issued when creating the set, the number of sort entries in the set, the date and time it was created, etc., as you can see from the example below:

The first line shows the DEFINE SET command used to create the set. The next line shows the ORVYL data set containing the sort data, and the date, time and account that created it. The next line tells what file and subfile the data comes from. (No subfile is listed if the set was generated after an ATTACH command rather than a SELECT command.)

The next line announces whether the set has been sorted or not, with the next line telling by which elements the set was sorted. (Remember that the ORDER parameter on the SPISORT command allows you to specify a different order for sorting than appears in the DEFINE SET command.) Finally, at least in the example, the display tells you how many sort entry records were generated for the set and how many ORVYL blocks the set uses for storage.

For direct sets, discussed elsewhere, SPIRES tells you the minimum, maximum and average length of the sort entry records at the end of the display. [See 1.8.]

1.7  Filters and SPISORT

When a set is created with the DEFINE SET and GENERATE SET commands, "path information" is created to tell SPIRES how to retrieve appropriate element occurrences involved in the sorting. SPIRES uses the path information when records are processed under the FOR SET command; by default, only the particular occurrence of the element that caused the record to be sorted in that position will be retrieved when the record is displayed. (When multiply occurring elements or structures are named in the DEFINE SET command and TV other than 1 is specified, multiple copies of a single goal record in the set may be created.) This "automatic filtering" applies to formatted or non-formatted output commands.

For example, suppose a set is defined to sort records on the multiply occurring element EMPLOYEE. A record containing two employees appears twice in the set, but each time the record is displayed under the FOR SET command, only a single employee's name appears:

SPIRES automatically filtered out the occurrences of EMPLOYEE that did not cause the record to appear at that position in the set. The automatic filtering occurs regardless of whether the set is sorted or not -- the path information is created when the set is created, not by the SPISORT program.
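The mechanics of this automatic filtering can be sketched as a Python analogy (the record and names are invented; nothing here is SPIRES syntax). Each occurrence of the multiply occurring element generates its own sort entry, and the stored "path" is what filters the display back down to one occurrence:

```python
# A record with a multiply occurring EMPLOYEE element.
record = {"DEPT": "SALES", "EMPLOYEE": ["BAKER, K.", "ADAMS, J."]}

# One sort entry per EMPLOYEE occurrence; the entry remembers which
# occurrence caused it (the "path information").
entries = [
    {"key": emp, "record": record, "path": i}
    for i, emp in enumerate(record["EMPLOYEE"])
]
entries.sort(key=lambda e: e["key"])

# Displaying under FOR SET shows only the occurrence on the path,
# so the same record appears twice, once per employee.
for e in entries:
    print(e["record"]["EMPLOYEE"][e["path"]])
```

The record is stored once; only the lightweight entries are duplicated, one per occurrence.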

The UNFILTERED option can be added to the FOR SET command if the automatic filtering described above is not desired:

Explicit filtering can be done with the SET FILTER command, which may be used to control the sort entries created for a goal record when the set is created. For example, compare these two sets of commands applied to the same subfile:

The command sequences are identical, except for the SET FILTER command in the second example, which tells SPIRES to treat the goal records as if only the first five occurrences of the GUEST element exist.

When you use explicit SET FILTER commands on the sort elements to create a set, you should clear those filters when you process the set. (Note that SPIRES will be forgiving if you leave those filters set; that is, the output will be the same whether they are set or not. However, if you were to set other filters on the sort elements when processing the set, the results might be unpredictable. Hence, it is recommended that you not set any filters on the sort elements when you are processing the set under FOR SET.)

Information about filters with direct sets appears in the next section. [See 1.8.] General information about element filters appears later in this manual. [See 21.] The path information may be applied selectively in custom-designed SPIRES formats, if desired. See the manual "SPIRES Formats" for details on the PATH and NPATH options.

1.8  Direct Sets

Under FOR SET processing, SPIRES extracts from each sort entry a pointer, used to retrieve a goal record for processing. A special type of set, called a "direct set", can be created that already contains the goal record data desired for FOR SET processing -- that is, SPIRES does not need to retrieve each goal record for processing, because the desired data for display is already in the set.

Processing a direct set under FOR SET is more efficient than processing a non-direct set. Direct sets are a benefit primarily to large sorting applications, where thousands of sort entries are sorted and processed; and to sorting applications where the same set is sorted multiple times for multiple reports.

The subsections of this section will discuss direct sets in detail. The first subsection will cover the additional options on the DEFINE SET command that are used to create direct sets. [See 1.8.1.] The next will discuss the processing of direct sets, using the DIRECT option on the FOR SET command. [See 1.8.2.] Following that will be a discussion on using filters with direct sets. [See 1.8.3.]

1.8.1  Direct Sets and the DEFINE SET Command

Direct sets are created by adding the DIRECT option to the DEFINE SET command, and used by adding the DIRECT option to the FOR SET command. [See 1.8.2.] Here again is the syntax of the DEFINE SET command:

  DEFINE SET setname [REPLACE] [DIRECT [EXTERNAL] [SCAN] [ALL]] [TV=ALL]...
   ... ELEMENTS [=] element-list [+ direct-list] [- direct-list]

REPLACE, "setname", TV=ALL, and "element-list" were discussed in the section on the DEFINE SET command. [See 1.3.1.]

The DIRECT option tells SPIRES that the set should be a direct set. The other options relating to direct sets, which are discussed in detail below, are:

Determining Which Elements Go Into the Direct Set

All the elements in "element-list" -- that is, the sort elements -- will be included in the direct set, so in general, there is no reason to list them also in the "direct-list" (but see the notes below). The other options, ALL and the "direct-lists", let you add other elements from the goal records to the direct set, in a manner similar to the SET ELEMENTS command. For example, if DIRECT ALL is specified with no "direct-lists", then all the data in the record that passes through the filtering in effect when the GENERATE SET command is issued will go into the sort entries generated for the direct set.

Important: DIRECT ALL does not mean that all occurrences of a multiply occurring sort element will go into the set -- only the occurrence of the element that is generating that entry will. (The others are being filtered out, and hence are not included in the sort entry.) If you will need to display all occurrences of a sort element, rather than just the occurrence for that sort entry, do not use direct sets.

The "direct-list" options can be used to add or subtract elements for storage in the direct set. In general, if you use the ALL option, you might subtract elements you did not need to be included in the set; if you do not use the ALL option, you might need to add elements to the direct set. You can, however, add or subtract elements as desired. For instance, you can subtract an entire structure from ALL, and then add back a single element within it:

All the elements in the goal record except structure D but including element E in structure D will go into set X.
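The add/subtract arithmetic on the "direct-lists" can be illustrated with a small Python sketch (the element names mirror the structure D and element E mentioned above; the dotted naming is only an illustration, not SPIRES notation):

```python
# Analogy for DIRECT ALL minus structure D plus element E within D.
all_elements = {"A", "B", "C", "D.E", "D.F", "D.G"}

# Subtracting structure D removes every element inside it...
structure_d = {e for e in all_elements if e.startswith("D.")}

# ...and the "+" list adds back the single element E.
kept = (all_elements - structure_d) | {"D.E"}
print(sorted(kept))
```

Everything outside structure D survives, and of D's contents only E goes into the set.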

Any elements defined in the file definition for the goal record can be named as sort elements or "direct elements"; additionally, elements within phantom structures may be named in a DEFINE SET command for a direct set.

Dynamic elements, including elements from dynamically-defined phantom structures, may also be included. SPIRES will figure out the value of a dynamic element at the time the set is generated, storing that value in the set. Later, under "FOR SET name DIRECT", SPIRES will use the stored value as the element value. [See 1.8.2.] In fact, you don't need to redefine the element when you use the "FOR SET name DIRECT" command; if you do, SPIRES will ignore it (and any other dynamic element definitions) until after the next ENDFOR. (If you SHOW DYNAMIC ELEMENTS under "FOR SET name DIRECT", SPIRES will show it simply as "DEFINE ELEMENT name", with no further information, which indicates that definition is irrelevant under this form of FOR SET processing.)

The SCAN Option: Controlling Which Sort Entries are Created

The SCAN option lets you apply WHERE clause processing as the sort entries are created for the direct set. SCAN causes the GENERATE SET command to apply the current WHERE clause criteria to each sort entry to see if the entry should be included in the direct set.

Here is a very simple example to illustrate the effect. Here is a sample record:

And here are commands to select records and create the set, first without the SCAN option:

The WHERE clause was applied to choose records to go in the direct set, but the set has entries for combinations besides the "apple/cat" one.

By adding the SCAN option to the DEFINE SET command, the WHERE clause will be applied as the set entries are created:

Note that the same effect could have been achieved by adding a WHERE clause to the FOR SET command in the first example (FOR SET APPLES WHERE A = APPLE AND B = CAT). The difference is that in the second example, only one entry was created at all for the direct set. The WHERE clause processing happened when the set was created (GENERATE SET), not when the set was used (FOR SET).
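The timing difference is the whole point of SCAN, and it can be sketched in Python (an analogy only; the element names A and B and the apple/cat values come from the example above, but the code is invented):

```python
# A record with multiply occurring A and B elements generates one sort
# entry per (A, B) combination.
from itertools import product

record = {"A": ["apple", "pear"], "B": ["cat", "dog"]}

def where(a, b):                       # stand-in for the WHERE clause
    return a == "apple" and b == "cat"

# Without SCAN: the WHERE clause chose the record, but every
# combination still becomes a sort entry.
entries = list(product(record["A"], record["B"]))
print(len(entries))                    # all four combinations

# With SCAN: the test runs per entry, at generation time, so only the
# qualifying combination is ever stored.
scanned = [(a, b) for a, b in product(record["A"], record["B"])
           if where(a, b)]
print(scanned)
```

With SCAN the filtering cost is paid once at GENERATE SET time, and the stored set is smaller.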

The SCAN option must immediately follow the word DIRECT in the DEFINE SET command.

External and Internal Forms of Sort and Direct Elements

When generating a set, SPIRES by default places the internal form of both sort elements and direct elements into the set. For sort elements, however, you may want SPIRES to sequence on the external form, in which case you include the "X" parameter in the sort options for the element:

Similarly, you can request that a direct element be put in the set in its external form:

This technique is primarily useful when you will be processing the direct set multiple times -- the element does not have to be processed through its OUTPROC rules for each report. That can save a great deal of processing, particularly when the OUTPROC rules are complex, e.g., requiring other records to be fetched for table lookups (the $LOOKUP and $SUBF.LOOKUP procs).
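The savings can be sketched in Python (an analogy; `outproc` below is a hypothetical stand-in for an expensive OUTPROC rule string, not a SPIRES facility):

```python
# Storing the external (formatted) form once at set-creation time,
# rather than re-running the formatting for every report.
format_calls = 0

def outproc(value):            # stand-in for an expensive OUTPROC rule
    global format_calls
    format_calls += 1
    return value.upper()

internal_values = ["red", "blue"]

# "EXTERNAL" behavior: format each value once while generating the set.
set_entries = [(v, outproc(v)) for v in internal_values]

# Three "reports" then reuse the stored external form.
for _ in range(3):
    report = [ext for _, ext in set_entries]

print(format_calls)            # formatted once per value, not per report
```

The formatting ran twice in total instead of six times; with table-lookup procs the difference would be record fetches, not just string work.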

You can request that all the elements in the "direct-list" and in the "element-list" be saved in the set in their external forms for output by adding the EXTERNAL option to the DEFINE SET command, following the DIRECT option. Important: That does not affect what form is stored for each element in the "element-list" for sorting purposes. In other words, if EXTERNAL is specified for the direct set and all the elements in the "element-list" for sorting are to be sorted in their internal form, then SPIRES will place both the internal and external form of each in the set.

You can override the EXTERNAL option for elements in the "direct-list" on an individual basis by adding the "(I)" option after its name in the list. For example:

-> define set sortaray direct external elem type + name contact(i)

In that case, SPIRES will generate sort entries in which the TYPE element is stored in its internal form for sorting and in its external form for displays. Also, the NAME element will be stored in its external form, and the CONTACT element in its internal form. That might be useful in a table-lookup situation where you want the lookup to be done at the time of the display under FOR SET, not at the time the set is generated. (See below.)

Note these implications of storing the external form of a direct element in a set -- the direct element behaves as if it had no processing rules at all, which means:

Be aware that elements defined as "OUTPROC-required" by the file definition can be placed in the set only in their external form, and the above implications will apply to them too.

You can request that a sort element be sorted by its internal form but be stored as a direct element in its external form by explicitly adding it to the direct element list, e.g.,

Conversely, you can request that the sort element be sorted by its external form but be stored as a direct element in its internal form in one of the following ways:

The "I" option indicates that the element should be carried as a direct element in its internal form.

You can specify that SPIRES not include a sort element in the collection of direct elements by using the "- direct-list" option, as in this example:

The COLOR element would be used as a sort element but would not be stored as a direct element, and thus would not be accessible under direct set processing.

Virtual Elements, Hidden Elements

Virtual elements can be specified as either sort or direct elements. They become "real" elements when the set is created, in that their current value (at the time the set is created) is stored in the set. Either the internal or external form may be specified for the set. The external form for direct set storage is determined by executing the virtual element's OUTPROC rules; the internal form is determined by executing its OUTPROC rules and then its INPROCs.

You cannot use sets to circumvent security provisions. For example, elements whose values are hidden from your account may not be placed into a direct set. SPIRES may not display an error message when you issue the DEFINE SET command, but the GENERATE SET command will certainly not place the hidden data into the set.

Direct Sets are Larger

The GENERATE SET command works the same for direct sets as for non-direct sets, except that the generated set will be larger. [See 1.3.2.] There is an absolute limit of 5500 bytes per sort entry, but as sort entries become very large, they cause inefficient processing. Hence, do not include non-sort elements in a direct set just to carry them along; be sure you need them in your final product.

The special form in which SPIRES stores direct data in the set requires eight extra bytes of overhead for each element or structure. It may thus be more efficient to place entire structure occurrences in the set (eight bytes for each occurrence of the entire structure) than a few individual elements from the structure (eight bytes for each element value). Of course, for overall storage savings, it is best to erase sets as soon as you no longer need them, since they are duplicating data already stored. [See 1.2.]
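A quick back-of-the-envelope calculation shows why whole structure occurrences can be cheaper (the occurrence and element counts below are hypothetical; only the eight-byte overhead figure comes from the text, and data bytes are ignored):

```python
# Overhead of the 8 extra bytes per stored element or structure.
OVERHEAD = 8
occurrences = 10          # occurrences of a structure in one record
elements_per_occ = 5      # elements wanted from each occurrence

# Storing each element individually: 8 bytes per element value.
per_element = OVERHEAD * occurrences * elements_per_occ

# Storing whole structure occurrences: 8 bytes per occurrence.
per_structure = OVERHEAD * occurrences

print(per_element, per_structure)
```

Under these assumptions the per-element approach costs 400 bytes of overhead per record against 80 for whole occurrences, a five-to-one difference that grows with the number of elements taken from each occurrence.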

1.8.2  Direct Sets and the FOR SET Command

Direct sets are processed like other sets -- under the FOR SET command. However, you must use the DIRECT option to request that the set be processed as a direct set, rather than just as a regular set.

Here again is the syntax of the FOR SET command:

Only the DIRECT option is discussed here. [See 1.5.]

Even if a set is a direct set, it can be processed as if it were a regular set, by omitting the DIRECT option. However, if the DIRECT option is specified, SPIRES will not retrieve the goal record that a sort entry points to, but will instead use the data within the sort entry for any record processing under FOR SET.

Only the following Global FOR record processing commands may be used to process "direct records": DISPLAY, SHOW KEYS and SKIP. Most other Global FOR commands, such as TRANSFER, REMOVE, MERGE, DEQUEUE, UNQUEUE and REFERENCE, will not work when a direct set is being processed. Note that the STACK command does work, though any stack created will retrieve goal records, not direct records.

You should not create a direct set unless all of the data you will need for reporting is included in the lists of sort elements or direct elements. Otherwise, you will be unable to use the DIRECT option, since you will need other data in the goal records. It is easy, for example, to forget elements that are retrieved by a report format with a $GETxVAL function or by the $GETELEM (A79) processing rule.

Virtual elements are handled in an interesting way in direct sets. They become "real" elements when the set is created, in that their current value (at the time the set is created) is stored in the set. Either the internal or external form may be specified for the set. If you display a direct set record in the SPIRES standard format, any virtual elements in it will be displayed too, without you having to set them explicitly in a SET ELEMENTS command.

Any direct element stored in its internal form is converted to external form by processing it through its OUTPROC rule string. This is true even for virtual elements -- the internal form (which was created under GENERATE SET by executing the element's OUTPROC rules and then its INPROC rules) is run through the OUTPROC rules again to get the external form.

Since you are allowed to issue the GENERATE SET command during direct set processing, you can create a direct set from another direct set. This technique can be considerably more efficient than generating multiple sets from the goal records for several different reports. If you need to sort the goal records several times on different elements, consider creating a direct set for the first one and creating other direct sets from it for the others.

There is no guarantee that using direct sets will save you money. If set entries are very large because you have many direct elements, the overhead to process them could offset the savings from not processing the goal records. If you are concerned about costs and your set will have many direct elements and/or sort elements, you should certainly run some tests using real data before deciding to use direct sets.

1.8.3  Direct Sets and Filters

If you have explicit filters set when you begin processing a direct set, SPIRES will clear those filters:

SPIRES clears the explicit filters because when you start processing a direct set, you are in effect working with an entirely different type of record than when you were working with the subfile's goal records, i.e., when the explicit filters were set. Such explicit filters for the goal records could cause problems if they were applied to the direct set. For example, such filters would not work properly with direct elements stored in their external forms.

Although pre-existing explicit filters are cleared when the direct set processing begins, you can set other filters (with the SET FILTER command) after issuing the "FOR SET setname DIRECT" command. Thus, you are allowed to continue filtering the direct set, but only after direct set processing has been initiated. As soon as an ENDFOR condition occurs, however, these filters are discarded, and the explicit filters that were automatically cleared are reestablished by SPIRES, as shown in the example above.

1.9  Display Sets

Display sets are a variation of direct sets. [See 1.8.] Display sets are created with the DEFINE SET and GENERATE SET commands, but unlike direct sets, display sets are not placed in ORVYL data sets. Instead, you specify an output area in which the display set should be placed (e.g. your terminal screen or your active file). The GENERATE SET command sends the set directly to the output area, so you do not need to use the FOR SET command to use the set.

Since display sets do not produce ORVYL data sets, you can't use SPISORT to sort the records in the set. But you may be able to take advantage of the automatic sorting in your record keys (using FOR SUBFILE) or in an index (using FOR INDEX) to control the order in which entries in the set appear. Or, since you can generate a set from a record stack, you could use the SEQUENCE command to arrange records in a particular order before you use the GENERATE SET command.

An advantage of this difference between display sets and normal sets is that display sets do not have the 5500-byte size limit that normal sets have.

Display sets are created by adding a DISPLAY option to the DEFINE SET command. Here is the syntax for display sets:

The ALL, TV=ALL, EXTERNAL, and "element-list" options were discussed earlier. [See 1.3.1.] The "display-list" options are similar to the "direct-list" options for direct sets. [See 1.8.1.] They let you add or subtract elements for inclusion in the display set. Note that you do not supply a setname for a display set.

The SCAN option works the same as for direct sets. It lets you apply WHERE clause processing as the sort entries are created for the display set. SCAN causes the GENERATE SET command to apply the current WHERE clause criteria to each sort entry to see if the entry should be included in the display set. [See 1.8.1 for an example.] The SCAN option must immediately follow the words DISPLAY SET.

Use the GENERATE SET command to send the display set to an output area:

If you omit the "IN areaname" prefix, the generated display set will be displayed at your terminal. The second option indicates which records in the Global FOR class are to be processed for the display set. The default is ALL records in the class.

Here is an example showing how to create a display set. Suppose you have a subfile whose goal records consist of a supply item, with structures for each order made for the item. The structures naturally occur in chronological order. You can create a set with an entry for each order this way:

2  An Alternate Form of the Subfile Name

Since file owners have almost complete freedom in choosing subfile names, the possibility always exists that you may have access to two or more subfiles with the same name. If you SELECT one of them, SPIRES will ask you which one you meant, showing you the file names of the subfiles to distinguish between them. Alternatively, when you SELECT one of the subfiles, you can specify all or part of the file name before the subfile name, in order to uniquely identify the desired subfile. This alternate syntax for the subfile name is also available:

The syntax for the subfile name in these commands is:

where the bracketed material is the name of the file as it appears in the first line of the file definition. You may also include a comma after the "filename" if it helps clarify the command syntax for you.

Starting from the left end, only as much of the file name as is needed to distinguish the subfile name from any other is necessary. Thus, for a file named "GA.JNK.MUSIC", you could specify "&GA.JNK.MUSIC" or as little as "&G". The "&" (ampersand) character is required if any part of the file name will be used, since it tells SPIRES that what follows immediately is a file name rather than a subfile name, which is expected.
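The shortest-unique-prefix rule can be sketched in Python (an analogy; the first file name comes from the example above, the second is invented to force an ambiguity):

```python
# "&prefix" resolution: the shortest left prefix that matches exactly
# one accessible file name is enough.
files = ["GA.JNK.MUSIC", "GB.ABC.MUSIC"]

def resolve(prefix, names):
    matches = [n for n in names if n.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None   # None: ambiguous

print(resolve("G", files))     # ambiguous between GA... and GB...
print(resolve("GA", files))    # unique, so it resolves
```

With only one MUSIC file accessible, "&G" alone would have resolved; the second file forces one more character.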

Here are examples using the file name "GA.JNK.MUSIC" and a "RECORDS" subfile:

Though it is not recommended, an even shorter method is available. In some cases, if the file being named is your own, you can replace your account number with an asterisk (*) or a period (.), as in "&*MUSIC" or even "&.". This technique is not recommended when you are writing code that will be executed by other users, since SPIRES may try to substitute their account numbers instead of yours.

3  Sharing Records in SPIRES System Subfiles with Other Users

All of the SPIRES system subfiles, such as FILEDEF and FORMATS, control access to their records by means of the record key, which begins with the account number of the user owning the record. By default, only you, the owner, can display and update your records in those subfiles.

Using a system subfile called METACCT ("met-account"), however, you can allow other users to display and possibly update your records in almost all system subfiles. Hence, you can share your system-subfile records with other users, letting them examine or copy a file definition, for example, or add their own procs to an EXTDEF record of yours. You simply add a record to the METACCT subfile that tells SPIRES what users have what access ("see-only" or "update") to your records in which subfiles. The only exception is METACCT itself, which is not affected by METACCT. A complete list of the affected subfiles appears in the next section.

To handle your records, users must first issue the SET METACCT command, naming the account whose records they want to see or use (yours). Their subsequent commands referring to your records for display and update or even compiling purposes will succeed or fail depending on the level of access you gave them in your METACCT record.

The next section of this chapter describes in detail the METACCT record that you the owner must create in order to give users access to your records. [See 3.1.] The section following that talks about this feature from your user's standpoint, describing the SET METACCT command in particular. [See 3.2.]

Having several people updating your system-subfile records can lead to confusion regarding the "current" version of a record. For example, suppose a user gets a copy of a file definition and makes changes to it; meanwhile, you do the same, updating the record before the other user does. When the other user's update occurs, your changes will be discarded.

To help avoid that type of problem, records in system subfiles may contain a special structure, called VERSION-STR. If used, this structure would in effect block the other user from updating the record until he or she takes your update into account. Details on VERSION-STR appear later in this chapter. [See 3.3.]

3.1  The METACCT Subfile

To give other users access to your records in one or more system subfiles, you must create a record stored under your account in the METACCT subfile. In addition, unless the access is limited to See-Only, the users can add new records to those subfiles for your account, make changes to records and update them, compile them, or even remove them. In a sense, by putting together a METACCT record, you are allowing some user or users to be you, at least in regard to some system subfiles (possibly limited to certain records; see KEY-PREFIX below). [See 3.2 for specific details on what users with METACCT access can do.]

The basic structure of the METACCT goal record-type is this:

Since you may have only a single record in the subfile, all special access to your records must be defined in this record. To that end, ACCOUNTS with SUBFILE may repeat under ACCESS, and ACCESS may repeat as well, forming multiple access-structures (see example below).

In case of contradictory statements in regard to a specific account or subfile, be aware that the least restrictive interpretation will always be chosen by SPIRES. For instance, if you give another account the ability to update records in one or all files, no other statements in the record can override that. Examples below will make this point clear.

Here are some other important details about ACCESS, ACCOUNTS and SUBFILE:

The only significant omission from the list of system subfiles above is METACCT itself. You cannot give other users access to your own METACCT record; only you can add it or change it.

If you give Update access to the FORCHAR subfile, then you are permitting the other user to remove any compiled formats for any of your files, even if the format's id belongs to neither you nor the other user but to a third user.

Similarly, if you give a user Update access to the COMP PROTO subfile, that user can remove any compiled protocols in any of your protocol files, even if others defined the SYS PROTO records for those protocols.

The METACCT record takes effect as soon as you add it to the METACCT subfile.

Here are some sample METACCT records:

That record gives all accounts in group AB access to user AB.USE's records in all the system subfiles listed above.

The next example is more complicated:

In this example, account AM.UCK gets Update access to all the system subfiles listed above, including FORMATS. The See-Only limitation for group AM users for the FORMATS subfile doesn't apply to AM.UCK, because the specific account reference to AM.UCK under "ACCESS = Update" is less restrictive than the specific subfile reference for the AM group.

Once the barn door is opened and an access-structure gives Update access to an account for some or all the subfiles, that access will not be revoked by any other access-structure. Below is a bad example, in that the record owner is trying to allow a particular user to update everything but the FORMATS subfile; it will not achieve the desired aim:

Since the first access-structure has already given Update access to all subfiles for account BY.TES, the second structure does not override it; hence, BY.TES has Update access to the records in the FORMATS subfile belonging to GA.TES.
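The least-restrictive rule amounts to taking the maximum access granted anywhere in the record, which can be sketched in Python (an analogy; the BY.TES account and FORMATS subfile come from the example above, and the grant tuples are a hypothetical rendering of the access-structures):

```python
# SPIRES resolves contradictory METACCT statements by taking the least
# restrictive access granted anywhere in the record.
RANK = {"NONE": 0, "SEE": 1, "UPDATE": 2}

grants = [                       # (account, subfile-or-"ALL", access)
    ("BY.TES", "ALL", "UPDATE"),
    ("BY.TES", "FORMATS", "SEE"),   # does NOT revoke the grant above
]

def access(account, subfile):
    best = "NONE"
    for acct, subf, level in grants:
        if acct == account and subf in ("ALL", subfile):
            if RANK[level] > RANK[best]:
                best = level
    return best

print(access("BY.TES", "FORMATS"))   # the broader UPDATE grant wins
```

The See-Only structure never lowers the result, which is exactly why the third example fails to wall off FORMATS.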

If it is really necessary to give an account Update access to all but one or two subfiles to which it should have See-Only access, you must type the name of each of the subfiles for Update access. In practical usage, however, this seems to be an infrequent need. In most cases, you should be able to state the requirements fairly simply.

You could easily concoct more complicated examples, but generally speaking, practical uses are relatively simple, more like the first two examples than the third.

If SPIRES's interpretation of that third example record still strikes you as odd, consider how you would interpret the same record if the See-Only account for the FORMATS subfile were listed as BY... (i.e., group BY) rather than the specific BY.TES. Is the record owner trying to allow group BY to have See access to FORMATS and in addition allow account BY.TES to have Update access to all the subfiles? Or does the record owner really mean to prevent BY.TES from updating those FORMATS records? The former interpretation seems more plausible, which is the way SPIRES would interpret that METACCT record.

3.2  Using METACCT Privileges: The SET METACCT Command

This section describes the privileges that METACCT access provides to you as a user of someone else's records, as well as the procedure you must follow in order to use them.

Generally speaking, you find out that you have been given METACCT access to someone's system-subfile records because he or she tells you about it. Aside from trial and error, you have no way of finding out whether you have access to other people's system-subfile records.

If you have been granted access to someone else's records, you can begin using them by issuing the SET METACCT command in SPIRES:

where "gg.uuu" is the other person's account number, i.e., the account of the user whose records are to be retrieved or updated or compiled. By including multiple accounts in the command, you can request METACCT access to the records of several accounts at one time.

SET METACCT is a session command, remaining in effect for the duration of the SPIRES session, or until it is cancelled.

CLEAR METACCT cancels the current METACCT access. Issuing another SET METACCT command also cancels the current METACCT access, and then establishes access for the new list of accounts. SHOW METACCT shows you a list of the accounts currently set, or displays the message: "No accounts defined." (Syntax note: You can also spell out METACCOUNT in these commands if you desire.)

What Access Does METACCT Provide?

Generally, Update access to a system subfile through METACCT means that SPIRES treats you as if you were using both your account and the other user's when you have the subfile selected. Thus, you can see their records with the DISPLAY command, make changes to their records using the MERGE command or the TRANSFER/UPDATE sequence, or discard their records using the REMOVE command. You can even add new records for their account.

If you have been given access to compilable records, such as file, record, vgroup or format definitions, you can compile or recompile those records as well. Note, however, that for file definitions, you must have WRITE access (via ORVYL permits) to the owner's account. Additionally, if the compilation creates new ORVYL data sets, they will not have the normal PUBLIC access permits that are set when files are compiled or recompiled on the owning account. These will need to be set appropriately on the owning account. See section B.12.1 of the manual "SPIRES File Definition"; online, [EXPLAIN ORVYL FILES, PERMITS FOR IMMEDIATE INDEXING.]

See-Only access is a subset of the Update privileges. As its name implies, See-Only access allows you only to see the other person's records (for example, with the DISPLAY or TYPE commands), but not to add new ones, change existing ones or remove old ones.

As described in the previous section, [See 3.1.] the owner may grant this access to his/her records in one, some or all of the following system subfiles:

Other commands that work with source records will probably work through METACCT access. A good example is PERFORM FILEDEF SUMMARY, which summarizes the file definition for the selected subfile. If you have at least See-Only access to FILEDEF for someone else's file definitions, you can issue this command when you have one of their subfiles selected. [See 28.4.] PERFORM PRINT is also allowed when appropriate METACCT access is in effect. [See 28.1.]

3.3  Using VERSION-STR to Track Copies of System File Records

Several system subfiles in SPIRES (e.g., FILEDEF, FORMATS) contain a structure called VERSION-STR. This structure can help you keep track of what "version" (i.e., which copy) of a system-subfile record you are working with. Because VERSION-STR is designed to solve a particular problem, it is necessary to understand the problem in order to understand how VERSION-STR works.

Suppose you and a co-worker are both working on an application. Via SET METACCT [See 3.2.] you transfer the application's file definition from FILEDEF and begin to make some changes to it. Your pal needs to make some changes to it too, so independently of you, she transfers the file definition, makes her changes, and updates it. Then you finish your work and update the record, losing her changes.

The data maintained by the VERSION-STR structure can prevent this problem. When you transfer your copy of the record to work with, the VERSION-STR structure contains a version number, such as "12". Similarly, your co-worker, getting a copy shortly thereafter, would also get version 12 of the record. You both make your changes, leaving the VERSION-STR untouched. When she issues the UPDATE command, SPIRES compares the version number of the stored record with the version number in the input; since they are the same, the record goes back in without an error. At that time, SPIRES assigns the record the next version number, which here is 13.

When you issue the UPDATE command, the command fails, because the version number of the stored record, 13, doesn't match the version number in the input data, 12. Hence you are warned that the record has been updated since you retrieved your copy of it. Presumably, you'll get a copy of the record in its latest form, and merge your changes in with that one. [You could also simply choose to change the version number in your copy of the record to 13 and issue the UPDATE command again. The VERSION-STR feature serves only as a warning; it cannot absolutely prevent records from being updated inappropriately.]

To request that version information be maintained for a system-subfile record, you simply add the VERSION-STR structure to the record:
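
     VERSION-NUMBER = n;
     VERSION-ACCT = gg.uuu;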

Because VERSION-NUMBER is the key of the VERSION-STR structure, you do not need to include the VERSION-STR statement.

For VERSION-NUMBER, "n" must be a positive integer no greater than 999999; it's most often set to "1". (After 999999, VERSION-NUMBER wraps around to 1 again.)

If you omit VERSION-ACCT, SPIRES will provide it with your account number, i.e., the account of the logged-on user. SPIRES does not verify the account value; it will accept any value you type, even one that is not a valid account number or form. Similarly, an account abbreviation such as ".", which most other SPIRES contexts recognize as meaning "your account", is treated here as the typed character rather than being translated into your account number. SPIRES will, however, display the account number to you in lowercase, for a reason explained below.

The VERSION-ACCT is significant because SPIRES will ignore the VERSION-STR if the value for VERSION-ACCT doesn't match the account in the key of the record. The example described below will explain why that is useful.

The VERSION-STR structure is available in these system subfiles:

Practical Uses of VERSION-STR

If you are the only one who updates your system-subfile records, VERSION-STR will be of limited use. You could use it to keep track of copies of a record by their version numbers, just as you might use the MODDATE and MODTIME elements, e.g., to verify that the paper copy you have is the latest version.

When several users may be updating your system-subfile records, VERSION-STR is more useful, as described in the earlier example. Here, in more detail, is how you might use it.

Suppose that one of your applications keeps its production code under account GQ.PRD, and its test code under GQ.TES. Within each GQ.PRD record, you could add the VERSION-STR structure, like this:
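
     VERSION-NUMBER = 1;
     VERSION-ACCT = gq.prd;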

On account GQ.PRD, put the record back into the system subfile (FILEDEF, for this example). SPIRES will immediately change the record's version number to "2" internally.

Sometime later, you get a copy of file definition GQ.PRD.ALMANAC, moving it under your test account, GQ.TES. With the WYLBUR command CHANGE, you change "GQ.PRD" to "GQ.TES". (Since the VERSION-ACCT value is in lowercase, it remains "gq.prd".) You then make other changes you want to make, adding the record to FILEDEF as GQ.TES.ALMANAC. Because the VERSION-ACCT doesn't match the account in the key of the record, the VERSION-STR data is ignored, and isn't updated.

Eventually you are ready to replace the GQ.PRD.ALMANAC file definition with the new version under GQ.TES. Again you issue the CHANGE command to change occurrences of "GQ.TES" to "GQ.PRD" (again leaving the VERSION-ACCT value untouched), and then update the file definition with the new copy.

Because the VERSION-ACCT value now matches the account in the record key, SPIRES will pay attention to the structure. If the input version number ("2") matches the version number in the stored record, then the update can continue. If it doesn't match, then the record has been updated since the time you made the copy of it; SPIRES will reject the new copy, meaning you need to resolve the discrepancies between them.

Technical Notes on VERSION-STR

VERSION-STR and the MERGE command

WARNING: If you use the MERGE command to update a system-subfile record (rarely done under any circumstances, we hope) and the stored record contains the VERSION-STR structure, be sure to include the VERSION-STR in your input as well. If you don't, the VERSION-NUMBER will not get checked/updated appropriately.

The warning also applies to updates done through partial processing; be sure to open and close the VERSION-STR structure during your processing.

Getting rid of the VERSION-STR

Generally speaking, the only way to eliminate the VERSION-STR structure once you have begun using it is to get a copy of the record, remove the record from the subfile, delete the VERSION-STR structure from the copy, and then add the record back into the subfile again.

VERSION-STR and Secure-Switch 4

Users of a system subfile with secure-switch 4 set (only people involved with SPIRES system administration) may also eliminate the VERSION-STR structure of records directly, simply by transferring them, deleting the VERSION-STR lines, and updating them.

4  Object Code Management: The ZAP Subfile-name Commands

In SPIRES, the following kinds of data are or can be compiled:
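
     file definitions
     format definitions
     record definitions
     variable group (vgroup) definitions
     protocols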

Whenever one of these kinds of data is compiled, SPIRES stores the compiled object code. For file definitions, the compiled characteristics are stored in an ORVYL file, on the file-owner's account, called "filename.MSTR". For all other object code, SPIRES creates a record in another SPIRES system file. Likewise, when a user issues a STORE STATIC command, SPIRES creates a record in a SPIRES system file.

To allow you to manage (i.e., remove) records created in these other SPIRES system files by the COMPILE and STORE STATIC commands, a set of ZAP commands is available in SPIRES to delete these records, and optionally to delete the source records from which they were derived.

The general form of these commands is:
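
     ZAP source-subfile source-record-key <SOURCE>

(In this sketch of the syntax, the angle brackets mark the optional SOURCE keyword, described for each command below.)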

where "source-subfile" is: FORMATS, RECDEF, SYS PROTO or VGROUPS. (Note: the source-subfile names can be abbreviated to three or more characters.) The "source-record-key" is the key of the record given in the COMPILE command; as in these commands, the key need not be fully qualified by the user's account number, since the user can only ZAP object-code records defined by the logged-on account. A ZAP STATIC command is available to remove data records for stored variables.

To use any of these ZAP commands, the source-record must be stored in the appropriate SPIRES system file. That is, to ZAP the compiled code for a format, the definition for that format must be in the public FORMATS subfile. If the source record is not available, then more manual methods of object-code management must be used. Contact the SPIRES consultant for more information.

The following sections describe most of the various ZAP commands available in more detail.

4.1  The ZAP FORMAT Command

To remove object code generated when a format is compiled, the following ZAP command is available in SPIRES:
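
     ZAP FORMAT source-record-key <SOURCE>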

The source-record-key is the value of the ID statement in the format, which is the key of the FORMATS subfile record. The source-record-key need not be prefixed by the logged-on user's account number. If the SOURCE option is used, then the source record is removed from the FORMATS subfile.

For example:
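
(The format ID in this reconstructed example is hypothetical.)

     ZAP FORMAT ALMANAC.BRIEF SOURCE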

You can only zap the object-code of a format when you own the format. Whether or not you own the file to which the format applies is immaterial. The file owner can therefore only zap the formats he created for the file.

4.2  The ZAP RECDEF Command

To remove object code generated when a record definition is compiled from the RECDEF subfile, the following ZAP command is available in SPIRES:
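
     ZAP RECDEF source-record-key <SOURCE>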

The source-record-key is the value of the ID statement in the record definition, which is the key of the RECDEF subfile record. The source-record-key need not be prefixed by the logged-on user's account number. If the SOURCE option is used, then the source record is removed from the RECDEF subfile.

For example:
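
(The record-definition ID in this reconstructed example is hypothetical.)

     ZAP RECDEF ALMANAC.GOAL SOURCE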

4.3  The ZAP SYS PROTO Command

To remove object code generated when a protocol is compiled using the older SYS PROTO method of compilation, the following ZAP command is available in SPIRES:
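
     ZAP SYS PROTO control-record-key <SOURCE>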

The control-record-key is the value of the ID statement in the protocol's compiler-control definition, which is the key of the SYS PROTO subfile record. The control-record-key need not be prefixed by the logged-on user's account number. If the SOURCE option is used, then the control record is removed from the SYS PROTO subfile.

For example:
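
(The control-record key in this reconstructed example is hypothetical.)

     ZAP SYS PROTO ALMANAC.ENTRY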

If the protocol is compiled directly from the source subfile (the newer method of compiling protocols), you should use the ZAP PROTOCOL command:
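
(A sketch of the form; the options parallel those of the commands above.)

     ZAP PROTOCOL protocol-name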

4.4  The ZAP VGROUP Command

To remove object code generated when a variable group definition is compiled from the VGROUP subfile, the following ZAP command is available in SPIRES:
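
     ZAP VGROUP source-record-key <SOURCE>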

The source-record-key is the value of the VGROUP statement in the variable-group definition, which is the key of the VGROUPS subfile record. The source-record-key need not be prefixed by the logged-on user's account number. If the SOURCE option is used, then the source record is removed from the VGROUPS subfile.

For example:
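
(The vgroup key in this reconstructed example is hypothetical.)

     ZAP VGROUP ALMANAC.VARS SOURCE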

4.5  The ZAP STATIC Command

This command removes a particular set of stored static variables created by the STORE STATIC command, or it can remove all sets of stored static variables for a particular variable group.

Three separate command forms are allowed:
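
(The keywords of the second and third forms below are sketched from the descriptions that follow; the exact spelling may differ.)

     ZAP STATIC storage-record-name
     ZAP STATIC VGROUP vgroup-name
     ZAP STATIC ALL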

The first form removes a particular stored set of static variables; "storage-record-name" is the name given to the stored vgroup when the "STORE STATIC vgroup-name TO storage-record-name" command is issued. The second removes all stored static groups belonging to you for a particular vgroup. The third form eliminates all stored static groups belonging to you for any vgroups.

The ZAP STATIC commands will remove stored sets of static variables only if they belong to you, regardless of whose vgroup they apply to.

5  Multiply Defined Elements

SPIRES gives you great flexibility with files containing multiply defined element mnemonics -- that is, two or more elements with the same name occurring in a file definition. For example, the element COMMENTS might occur in several different structures in a file definition. "Floating structures" [See "SPIRES File Definition", section B.3.6.] also produce multiply defined element mnemonics.

The SET ELEMENTS, TYPE, ALSO and SEQUENCE commands allow specification of elements either by a simple element mnemonic or by a "structure@...@element" form (see examples below) which specifies a structural path to the desired element. Note that this second form is needed only if the element mnemonic is not unique in the file definition, i.e., the same name is used for several different elements.

If you use any of these commands with a simple element mnemonic when that element is multiply defined, not all of the occurrences of that element throughout a record are examined or displayed. If we say that record-level elements are at the highest level, and that elements in a record-level structure are at the next lower level, and that the innermost-nested structure elements are at the lowest level, we can say that the simple mnemonic option will cause SPIRES to locate the first appearance of the mnemonic at the highest level it is found in the file. However, using the "structural path" option will locate any specific appearance of the mnemonic.

The SET ELEMENTS and TYPE commands also allow the "@element" form, which means "all elements having the specified mnemonic". Again, this form is only needed if the same element mnemonic occurs more than once in the file definition.

The effects of these features are best explained with examples. You will note that different aliases for the same mnemonic can be used for clarity. The following is a skeletal part of a file definition, followed by three records from the file:

     1.  RECORD-NAME = ID;
     2.  SLOT;
     3.  OPTIONAL;
     4.     ELEM = X;
     5.        ALIAS = XR;
     6.     ELEM = S;
     7.        TYPE = STR;
     8.  STRUCTURE = S;
     9.     REQUIRED;
    10.        ELEM = X;
    11.           ALIAS = XS;
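
The three records, reconstructed from the SEQUENCE and ALSO discussions below, might look like this in standard format (the structure occurrences are sketched):

     Record 1:              Record 2:              Record 3:
        X = Afford;            S;                     X = Change;
        S;                        X = Before;
           X = Delete;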

Note that element X appears twice in the definition -- once at the record level (alias "XR") and once within a structure (alias "XS").

The SEQUENCE command

The command "SEQUENCE X" would arrange these records in the order 2,1,3. SPIRES examines values only at the highest level at which the element occurs in any record -- in this case, the record level. Since null values are listed first, record 2, with no value for X at the record level, comes first, followed by records 1 (X = Afford;) and 3 (X = Change;). The command "SEQUENCE XR" would give the same sequence.

However, you can type "SEQUENCE S@X" and the arrangement becomes 3,2,1. SPIRES looks for occurrences of X specifically in the structure S. Thus, record 3 with no occurrences of the element at that level comes first, then records 2 (X = Before;) and 1 (X = Delete;). "SEQUENCE XS" would also have the same result.

The ALSO Command

If you have a search result containing these three records, and you issue the command "ALSO X STRING FOR", only record 1 will be retained. SPIRES examines only the occurrences of the X element at the highest level the element can appear -- the record level, in this case. Thus, only the "XR" occurrences are examined. ("ALSO XR STRING FOR" would give the same result.)

"ALSO S@X STRING FOR" retains only record 2. ("ALSO XS STRING FOR" would also give the same result.)

The TYPE and SET ELEMENTS Commands

Both of these commands are affected similarly to the ALSO command, though they both have an additional feature, discussed below. For example, "SET ELEMENT X" will cause commands that display the records to display only the "XR" values -- that is, occurrences of the first element X found at the highest possible level, which in this case is the record level.

"SET ELEMENT S@X" will set the element with the "XS" alias -- that is, all occurrences of the element X within the "S" structure.

"SET ELEMENTS @X" is the additional feature: all occurrences of the element X at any level in a record will be "set".

The values for the element list on a TYPE command can also use the "structure@...@element" or "@element" options described here, or they can use neither option, with the same results as described above for the SET ELEMENTS command.

Remember that this information is only relevant to files with multiply defined element mnemonics, which are relatively rare.

5.1  The Throw-away Element: The "-" Element in SPIRES Subfiles

Each SPIRES subfile has a special element, the "-" (hyphen) element, that can be used during data entry for comments. When a record containing occurrences of this element is added to the subfile, the values for the "-" element are thrown away; they are not stored within the record. But why would anyone want to type comments that are thrown away on input?

Some SPIRES users find that they update the same large record or records over and over again. Each time they update the record, however, they may not remember why a particular element or occurrence of an element has a particular value. What these users do is to insert these "throw-away" comments throughout their input data and then save the input data separately from the subfile before adding the record to the subfile. The extra saved copy becomes the master copy of the record; when the record needs to be updated, the master is changed, saved again, and then used to replace the record in the subfile. When the record is added or updated in the subfile, these extraneous comments that are saved in the master copy somewhere else are not stored in the subfile copy.

Here is how some input using this element might look:
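
(The elements and values in this sketch are hypothetical.)

     - = MASTER COPY -- edit this file, not the subfile record;
     NAME = SMITH, JOHN;
     - = middle name verified against the personnel file;
     TITLE = ANALYST;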

As you can see, the "-" element can appear anywhere and any number of times in the input data, when you are using the standard SPIRES format. Of course, the value must end with a semicolon.

6  Storing DECLARE Data for Multipurpose Use

Declare data -- the commands and metadata that define declared elements and output control packets -- is normally found in protocols. However, when you'd like to use the same declare data in multiple protocols or repeatedly in command mode, you can store it in a separate subfile and execute the declaration from there. The subfile can be one of your own making, but system subfiles are also available in which you may store your declare data.

The first section of this chapter describes how to set up a subfile for declare data. [See 6.1.] The following one describes how to use your stored declare data. [See 6.2.]

6.1  Declare Data Subfiles

You can either create your own subfiles where you can store declare data or else, if your declare data is for declared elements or for output control packets, use two system subfiles.

The DATA MOVE DECLARES and DATA OUTPUT CONTROL Subfiles

Two system subfiles already exist for declare data; anyone can put declare data into them:
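
     DATA MOVE DECLARES      (declared element definitions)
     DATA OUTPUT CONTROL     (output control declarations)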

Additionally, anyone can refer to those records, although only the record owner, plus anyone to whom the record owner gives METACCT access, can update or remove them. If privacy of the declare data itself is an issue for you, you should create your own subfile(s) to hold it, as described below.

For each declared element or each output control package you want to store in one of the above subfiles, you just add an ID statement at the top and add it into the appropriate subfile:
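
     ID = gg.uuu.name;

where "gg.uuu" is your account number and "name" is a name of your choosing.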

The next section describes the commands you issue to use these records when you need them. [See 6.2.]

Creating Your Own Subfiles for Declare Data

You can create your own subfiles to hold your declare data, by defining the subfile's goal record-type with a DEFINED-BY statement that names a SPIRES system-supplied definition:

As in the DATA MOVE DECLARES and DATA OUTPUT CONTROL records, the records you add to the subfiles need to have an ID key supplied, which you will use in the WITH DECLARE prefix of the commands you issue to use the stored declare data.

For more information about the specific form for all of these different records, see the documentation for each type, e.g., declared elements or output control. [See 7.1, 20.3.]

6.2  Using Your Stored Declared Data

To use stored declared data, you need to:
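
     1)  Through a path, select the subfile where the declare data
         is stored.
     2)  Point SPIRES to that path with the SET DECLARE PATH command.
     3)  Issue the desired DECLARE command using the WITH DECLARE
         prefix.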

Here are the steps in detail:

1) Through a path, select the subfile where it is stored

Of course, this implies that you have first selected a primary subfile to use.

In this example, data defining a declared element is stored in the DATA MOVE DECLARES subfile:

2) Point SPIRES to that path

Use the SET DECLARE PATH command:
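
(A sketch of the form; "path-name" identifies the path on which the declare data subfile is selected.)

     SET DECLARE PATH path-name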

where:

3) Issue the desired DECLARE command using the WITH DECLARE prefix

The WITH DECLARE prefix, used on the DECLARE command, provides the key of the record holding the stored declare data:
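
     WITH DECLARE key-value declare-command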

Hence, the "key-value" is the key of a record within the named path for the particular type of declare data.

The rest of the command is the normal DECLARE command, like DECLARE ELEMENT or DECLARE OUTPUT CONTROL or DECLARE TABLE... You do not use the ENDDECLARE command, as you do when using the DECLARE command without the WITH DECLARE prefix, in a protocol.
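
For instance, a command along these lines (assuming the selected subfile's name element is called NAME):

     WITH DECLARE $NAME.MIDDLE DECLARE ELEMENT NAME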

In the example, a system-defined declared element definition called $NAME.MIDDLE is used to redefine the NAME element of the selected subfile. This specific technique could be used in any SPIRES subfile for any element that uses standard name-handling processing rules if you wanted to return just the middle parts of a name. There are also $NAME.FIRST and $NAME.LAST declared element definitions, among others, that could be used as well. See the keys in the DATA MOVE DECLARES subfile for a complete list of declared element definitions. See the DATA OUTPUT CONTROL subfile and the TABLES subfile for corresponding types of declared definitions.

7  Output Control

A SPIRES feature called output control lets you produce multiple reports or output processes while examining a set of subfile goal records just one time. By running multiple reports simultaneously rather than in sequence, you can save significant amounts of I/O and hence, in many cases, run time.

Additionally, output control's capabilities provide new options for creating output. For example, you can create output from multiple formats with the data going into a single device area. You may also have different filtering criteria set for the different reports or output processes.

Output control is established through a DECLARE command, DECLARE OUTPUT CONTROL. Like other declare processes, output control may be established in two ways:
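
     *  within a protocol, bracketed by the DECLARE OUTPUT CONTROL and
        ENDDECLARE commands; or

     *  stored in a declare data subfile, from which it is executed with
        the WITH DECLARE prefix. [See 6.]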

The first section of this chapter describes the output control declaration statements; the second describes how to use output control using the WITH OUTPUT CONTROL prefix. [See 7.1, 7.2.]

7.1  The Output Control Declaration

Output control is defined by a collection of statements known as "an output control declaration". The output control declaration consists of one or more "output control packets", each of which describes a piece of the output processing to be done. If you were using output control to write three different reports, you would probably have three output control packets, one for each report format that needed to be set.

The heart of an output control declaration looks like this:
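
     PACKET = identifier;
        (statements for the first packet)
     PACKET = identifier;
        (statements for the second packet)
     ...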

Up to 36 packets may be defined in a single declaration. The output control statements are individually discussed below.

Each PACKET statement signals the start of another output control packet. The identifier value may be anything; there are no restrictions on it. The packets remain and hence are executed in the order in which they are defined; they are not sorted by the PACKET value.

If you are storing the declaration in a declare data subfile (probably the DATA OUTPUT CONTROL subfile), you need to add an ID statement at the top of the declaration:
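
     ID = gg.uuu.name;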

where "gg.uuu" is your account number and "name" can be any alphanumeric name (it may include periods as well). [See 6 for more information on storing output control declarations.]

On the other hand, to define the output control declaration within a protocol, you need to surround it with the DECLARE OUTPUT CONTROL and ENDDECLARE commands:
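
     DECLARE OUTPUT CONTROL
        PACKET = identifier;
           ...
     ENDDECLARE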

Output Control Statements

Below is a description of each of the possible output control statements in a packet, in the order in which they would be stored in a declare data subfile. (The order in which you enter them is irrelevant, aside from the placement of the PACKET statements, described above.) Each of the statements is optional, though the occurrence or value of one statement might cause another one to be required.

First, here's a summary list of all the output control statements:
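
     FORMAT = format-name;
     USING.FRAME = frame-name;
     PARMS = format-parameters;
     AREA = area-name;
     OUTPUT.OPTIONS = option1, option2...;
     TRACE;
     WHERE = clause;
     FILTER = FOR element WHERE clause;
     FOR.EACH = elem1, elem2, ...;
     PLUS.ELEM = elem1, elem2, ...;
     GEN.SET;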

FORMAT = format-name;

This statement names the format that will be in control during the output for this packet. It will be set at the time the DECLARE statement is executed; startup frames will be executed as normal. However, unless the startup frame does something to call attention to itself (like allocating a global vgroup in shared mode; see CONTROL.OPTIONS below), the format, including the setting of it, is invisible outside of the declaration.

If no FORMAT statement appears, the format that is set at the time the output command (that is, the command with the WITH OUTPUT CONTROL prefix) is issued will be used. To explicitly request the standard SPIRES format, issue a CLEAR FORMAT command before issuing the output command. Note that vgroup sharing (one of the CONTROL.OPTIONS below) is automatically in effect for a packet with no FORMAT statement.

You may also specify a system format like $REPORT or $PROMPT. However, the options that you would specify on the SET FORMAT command after the format name may not be specified here; you must enter them in PARMS statements (see below).

USING.FRAME = frame-name;

You can use the USING.FRAME option to name a specific frame to execute within this output packet. The frame must also be defined with a USAGE of NAMED. See the Formats manual, section D.1.1.1 for more information; online, EXPLAIN USING FRAME COMMAND PREFIX.

PARMS = format-parameters;

These multiply occurring statements are the parameters you would normally specify on SET FORMAT (following the format name) and SET FORMAT * commands to set the format from command mode. For example, if you would issue these commands to set the format:
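
(The format name and parameters in this sketch are hypothetical.)

     SET FORMAT MYREPORT parm1
     SET FORMAT * parm2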

then in output control, your packet would include:
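
(The format name and parameters in this sketch are hypothetical.)

     FORMAT = MYREPORT;
     PARMS = parm1;
     PARMS = parm2;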

AREA = area-name;

Here you name the device services area to serve as the destination for this packet's output. In most cases, you would define and assign the area(s) to be used prior to the output control declaration (see the example below).

If you omit this option, the output is sent to the terminal.

An interesting twist for output control is to use the Subfile (SBF) area to use the output data directly as input for records into another subfile. EXPLAIN SBF AREA for more information on this technique. Note that exception file processing is available if needed; see the EXCEPTION.FILE statement, described below.

There are two ways to use the active file. If you are directing the output of only one packet to the active file, you can specify ACTIVE as the area. (In fact, you may specify ACTIVE for multiple packets to interleave the output from the different packets.)

However, if you don't want interleaving, use the second technique: define an area on the FILE area, and assign it to an active file; then name that area in the AREA statement. To define several areas on the active file, assign them to different active files (using the "new" option):

Use WYLBUR's SHOW ACTIVES ALL command to help you locate the output:

OUTPUT.OPTIONS = option1, option2...;

You can specify one or two of these traditional options:

TRACE;

This statement, which takes no value, turns on format tracing (SET FTRACE ALL) for the packet. Interleaving of trace data will appear if tracing is requested in more than one packet. You may use the SET TLOG command to send the trace data to a log file rather than your terminal.

WHERE = clause;

With this option you can specify a WHERE clause; only records that pass the WHERE clause criteria will then be processed by output control for this packet. This is useful when you need to create different subsets of records to be processed by different packets. Remember that the entire set of records being processed by all the packets in the output control declaration can also be limited by a Global FOR command with a WHERE clause.
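
A sketch of such a packet (the packet name, format, element and clause are all hypothetical):

     PACKET = JANUARY;
        FORMAT = JANRPT;
        WHERE = DATE GE 1 JAN 1996 AND DATE LE 31 JAN 1996;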

In this example, output control will process all 1996 records, but only the January 1996 records will be processed by the January packet, etc.

Since you rarely write a WHERE clause as an element value, be careful to follow data entry rules for the standard SPIRES format when you add one to an output control packet.

FILTER = FOR element WHERE clause;

With this multiply occurring option, you can specify overlay filters for the packet that will augment any filters already set globally. Since the only type of overlay filter allowed in this context is a display filter, which is the default, you need not specify the filter type. [See 21.1.1.]

In the syntax, "element" is the name of the element being filtered; any element can be filtered, including a virtual or dynamic element. [See 20.] The element to be filtered can also be a structure, which is quite common.

"WHERE clause" is a clause following the same rules as a WHERE clause in Global FOR. [See the SPIRES Global FOR manual for more information on WHERE clauses.] Among other uses, the "WHERE clause" option lets you filter an element's occurrences according to each occurrence's value.

Note that the "(occ)", SEQUENCE and "IN limit" options of the SET FILTER command are not available for overlay filters. If you need to use them, they must appear on the first SET FILTER command for the element, which for output control would have to be issued before the output control declaration and which would thus apply to all the output control packets.

If the format in the output control packet contains SET FILTER Uprocs, they are executed after the filters specified here in the FILTER statement.

FOR.EACH = elem1, elem2, ...;

This option sets up output control's equivalent to the DEFINE DISPLAY SET command; the elements listed here have the same effect as the DEFINE DISPLAY SET command's element list. Basically, display sets create the effect of multiple records from one record when the named elements occur multiple times within it. [See 1.9.] This is typically used to create tabular output from hierarchical records.

The DEFINE DISPLAY SET command that is issued for you behind the scenes looks like this:

In other words, the FOR.EACH option has these effects:

Important: When the FOR.EACH option is used, only elements listed here and in the PLUS.ELEM list below are available for direct output in the packet. (But elements that are used in the creation of the elements in either list, e.g., in a virtual element's Userproc, do not need to be included in the list, unless you specifically need to use them in the packet as well.)

PLUS.ELEM = elem1, elem2, ...;

As noted just above, if you use the FOR.EACH option, you may need to use this option too. Elements listed here do not create more tabular occurrences of a goal record; they merely tag along in the "set". Only elements listed in either list are available to the output control packet when the FOR.EACH option is used (with the exception noted in the previous paragraph).

For example, if you have a FAMILY record, with three occurrences of the PERSON structure:

You might code the following if you wanted each PERSON to cause a separate occurrence of the record in the output and want the output to include the FAMILY and STATE elements too:

In standard format, the output created from the sample record would thus be:

Note that the COUNTRY element is not included, since it was not named in either list. [See 1.8.1 for an explanation of direct lists, the direct set equivalent of PLUS.ELEM lists.]
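The FOR.EACH and PLUS.ELEM behavior just described can be sketched conceptually in Python (an illustration only; the record layout and values are hypothetical, not actual SPIRES structures):

```python
# Conceptual sketch of what FOR.EACH and PLUS.ELEM accomplish: one
# hierarchical record becomes one flat "set entry" per occurrence of the
# FOR.EACH element, with PLUS.ELEM values repeated on every entry.
def flatten(record, for_each, plus_elem):
    entries = []
    for occ in record[for_each]:
        entry = {for_each: occ}
        for elem in plus_elem:
            entry[elem] = record[elem]   # tag-along values repeat per entry
        entries.append(entry)
    return entries

family = {
    "FAMILY": "Jones",
    "STATE": "CA",
    "COUNTRY": "USA",                    # not listed, so not carried over
    "PERSON": ["Alice", "Bob", "Carol"],
}
rows = flatten(family, "PERSON", ["FAMILY", "STATE"])
```

Three PERSON occurrences yield three entries, each carrying the FAMILY and STATE values; COUNTRY, named in neither list, is absent from every entry.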

GEN.SET;

Besides using the single pass through the records to generate multiple types of output, you can also use that pass to generate a set, which is useful if you need to sort the same group of records in a different way, or if you simply need to save the set of records gathered here so that you can process them again in the future.

To do that, you must issue a DEFINE SET command outside and ahead of the output control declaration. Inside the output control declaration, you include an output control packet that includes GEN.SET and, optionally, a WHERE and/or multiple FILTER statements.

Structurally, adding set generation to output control in a protocol looks like this:

The result: besides creating the output defined in the first two packets, SPIRES also generates the set named "xxx".

You can include an additional WHERE clause in the GEN.SET output control packet, which has the effect of further restricting the records processed by the packet. Additionally, you can include one or more FILTER statements, which are treated as overlay filters of type SCAN, meaning that they will affect the set entries that are created. [See 21.1.] Aside from WHERE and FILTER statements, any other statements in a packet containing GEN.SET are ignored; so be sure to treat set generation as a separate output packet from any others.
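The idea of one pass producing both output and a set can be sketched conceptually in Python (an illustration only; the records, the WHERE predicate, and the output format are hypothetical):

```python
# Conceptual sketch: a single pass over the goal records both produces
# packet output and collects a set of record keys, the way a GEN.SET
# packet does alongside the other packets.
def single_pass(records, output_packet, set_packet_where):
    output, generated_set = [], []
    for rec in records:
        output.append(output_packet(rec))     # normal output packet
        if set_packet_where(rec):             # GEN.SET packet's WHERE clause
            generated_set.append(rec["KEY"])
    return output, generated_set

records = [{"KEY": "A", "YEAR": 1996}, {"KEY": "B", "YEAR": 1995}]
out, s = single_pass(records,
                     lambda r: f"{r['KEY']}: {r['YEAR']}",
                     lambda r: r["YEAR"] == 1996)
```

Both records are written to the output, but only the record satisfying the set packet's WHERE clause contributes an entry to the generated set.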

CONTROL.OPTIONS = option1, ...;

You can request one or both of these options by coding the CONTROL.OPTIONS statement:

BYPASS.LIST = elem1, elem2, ...;

Used in conjunction with the GENERATE.CHANGES control option, this option names elements whose values should not be compared between copies of a record and hence not generate change information. In other words, if the values of these elements change between the tree and defq copies of the record, they will not generate changes. [See 9.]
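The comparison that BYPASS.LIST short-circuits can be sketched conceptually in Python (an illustration only; the element names and values are hypothetical):

```python
# Conceptual sketch of BYPASS.LIST: when comparing the tree and defq
# copies of a record, differences in bypassed elements are ignored and
# generate no change information.
def changed_elements(tree_copy, defq_copy, bypass_list):
    changed = []
    for elem in tree_copy:
        if elem in bypass_list:
            continue                          # bypassed: never reported
        if tree_copy[elem] != defq_copy.get(elem):
            changed.append(elem)
    return changed

tree = {"NAME": "Smith", "PHONE": "555-1212", "LAST-TOUCHED": "960101"}
defq = {"NAME": "Smith", "PHONE": "555-3434", "LAST-TOUCHED": "960401"}
changes = changed_elements(tree, defq, bypass_list=["LAST-TOUCHED"])
```

Only the PHONE difference is reported; the LAST-TOUCHED difference, being bypassed, generates nothing.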

EXCEPTION.FILE = orvyl.file.name [REPLACE];

If you use the SBF (SuBFile) area (see the AREA statement above) to direct output data into a subfile as input, you can code this statement to request exception file processing. For more information about exception files, EXPLAIN EXCEPTION FILE PROCEDURE.

Add the REPLACE option to request that SPIRES replace the data set, if it already exists, without asking you for permission to do so.

If several packets write to separate SBF areas, you must specify a separate exception file for each packet.

TABLE.NAME = table-name;

Output control can invoke tables that have been pre-declared with the DECLARE TABLE command. [See 17, 17.2.] This tool lets you in effect re-map a SPIRES subfile into one or more flat, relational tables. In this statement, you name the pre-declared table you want to use.

Note that if you use the $REPORT or $PROMPT format, the elements you would name in the PARMS statement would be "table" elements (that is, elements defined in the table declaration), not the primary subfile's elements. Note too that other set-related statements, such as FOR.EACH and PLUS.ELEM, are not used in the same packet as TABLE.NAME because they are relevant to normal sets, not sets that are generated via tables.

TABLE.WHERE = clause;

You add this statement if you want to filter the entries in the table with where-clause criteria. This is equivalent to adding the "SET FILTER FOR * WHERE clause" command in the table path prior to generating the table set. [See 17.2.] It limits the "row" output to create only rows that match the criteria expressed in the where clause. The elements named in the clause should be elements defined in the table declaration.

7.2  Using Output Control

Once the output control definition has been declared, you request output control processing by adding the WITH OUTPUT CONTROL prefix to an output command, either TYPE or DISPLAY.

Note that with this prefix you cannot also use the IN ACTIVE prefix to direct output that would normally go to the terminal to the active file instead. If you want output from output control to go to the active file, you must specify that within the output control declaration, as described in the previous section. [See 7.1.]

You can issue the CLEAR OUTPUT CONTROL command to cancel output control for the selected subfile. A new DECLARE OUTPUT CONTROL command will replace any output control declaration already in effect (unless the new one fails, in which case the previous declaration remains in effect).

Here is an example demonstrating how to use output control. On a daily basis, you use a subfile (OUR DEPARTMENT PEOPLE) to both create a report listing staff members and add any new staff members in that subfile into another subfile, called MY PHONE BOOK. A protocol to do that might include these commands:

In the first output control packet, the Daily.List report is set, with its output directed to the active file; after all the processing is over, the report is mailed to GQ.JNK. In the second, a WHERE clause restricts the records processed by the packet to just those that were "added yesterday". The Add.From.ODP format transforms a couple of elements from the OUR DEPARTMENT PEOPLE records into the input for a MY PHONE BOOK record. The PHONEBOOK area, into which that data is directed, is defined on the SBF (SuBFile) device, which in turn passes the output to MY PHONE BOOK for input.

Note that the protocol takes advantage of the natural ordering of the records in Name order that exists in the NAME index by using the FOR INDEX command to establish the processing order of the records.
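The behavior of the two packets can be sketched conceptually in Python (an illustration only; the record fields, staff data, and date encoding are hypothetical):

```python
# Conceptual sketch of the two-packet protocol: one pass through the
# records drives both the daily report and the feed of newly added
# records into a second subfile.
def run_output_control(records, today):
    report, phonebook_input = [], []
    for rec in records:                        # single pass, in NAME order
        report.append(f"{rec['NAME']}  {rec['PHONE']}")     # Daily.List packet
        if rec["DATE-ADDED"] == today - 1:                  # WHERE "added yesterday"
            phonebook_input.append({"NAME": rec["NAME"], "PHONE": rec["PHONE"]})
    return report, phonebook_input

staff = [{"NAME": "Ada", "PHONE": "x1001", "DATE-ADDED": 99},
         {"NAME": "Grace", "PHONE": "x1002", "DATE-ADDED": 100}]
report, additions = run_output_control(staff, today=100)
```

Every record appears in the report, but only the record added "yesterday" is transformed into input for the second subfile.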

7.3  DATA MOVE Processing

DATA MOVE is the name given to a process whose task is to "move" data from a SPIRES "Source" hierarchical data base (Subfile) to one or more "Target" device areas. Typically, this is a process of creating external tables or other subfiles that have a table nature.

The DATA MOVE process is controlled by a SPIRES command called PERFORM DATA MOVE. The PERFORM DATA MOVE command is in turn under the control of information supplied by a SPIRES meta data record stored in the SPIRES Data Move subfile. [EXPLAIN PERFORM DATA MOVE COMMAND.]

Internally, DATA MOVE utilizes the OUTPUT CONTROL process to direct the movement of data from the Source subfile to multiple Target areas. These target areas may be any SPIRES Output Device Areas -- the ACTIVE file, OS or ORVYL files, or other SPIRES subfiles (SBF devices). The DATA MOVE meta data records provide the information needed by SPIRES to build an Output Control Declaration structure and to issue the command under Global FOR processing that will output the source information to the one or more target areas. [See 7, 7.2.]

DATA MOVE Subfile Information

Those familiar with SPIRES terminology and SPIRES tools will see and understand the DATA MOVE process most easily by studying the data values stored in DATA MOVE subfile records. The metadata record consists mainly of three separate structures that are used to control data movement and transformation. These structures (SOURCE.INFO, TARGET.INFO, and TARGET.AREAS) are described in detail below.

You should note that source data value descriptions and target data values are optional and may not even exist in some DATA MOVE records. In this case the source/target information is described by individual TABLE declaration records defined in a separate TABLE subfile. [See 17.1.] You can see an example of the setup and generation of Declared Tables and how they may be used by DATA MOVE. [See 7.5.]

RECORD Level Elements of the DATA MOVE Subfile

These values form the SPIRES information used by any system supplied meta-data record structure.

SOURCE.INFO Structure Elements

At this point you have a choice of doing Subfile output or Table output. Choose one of the following paths:

[See 7.3.1.]

[See 7.3.2.]

7.3.1  DATA MOVE for Subfile Output

Continuation of DATA MOVE processing. [See 7.3.]

For Subfile output, the SOURCE.INFO structure in DATA MOVE may contain:

SOURCE.VALUES Structure Elements

TARGET.AREAS Structure Elements

TARGET.INFO for Target Subfile output

The following information is given for each TARGET Subfile:

TARGET.VALUES Structure Elements

OUTPUT.CONTROL Packets

7.3.2  DATA MOVE for Table output

Continuation of DATA MOVE processing. [See 7.3.]

For Table output, the SOURCE.INFO structure in DATA MOVE may contain:

TARGET.AREAS Structure Elements

TARGET.INFO Structure Elements in DATA MOVE

TARGET.INFO for Table output

"table.options" consist of the following key words and their options:

OUTPUT.CONTROL Packets

7.4  DECLARE ELEMENT SUBFILE Description

DATA MOVE and DECLARE TABLE Source Values may be generated through Dynamic Elements. A convenient way to represent the dynamic element process is to use pre-defined dynamic element structures.

A DECLARE ELEMENT Subfile may be any SPIRES Subfile which is defined by the system $ELEMENT record definition in the RECDEF subfile. A subfile that is defined by $ELEMENT accepts the same data values as DECLAREd dynamic elements. [See 20.3.]

The DECLARE ELEMENT Subfile is referenced by the DECLARE.SUBFILE term in DATA MOVE and DECLARE TABLE processes.

DATA MOVE DECLARES -- The SPIRES System Declared Elements

A number of Declared Dynamic elements are currently available in the DATA MOVE DECLARES subfile. These have proven useful in data transformations from SPIRES to RDBMS tables. As time goes on we hope to build up this library. You reference this library by including:

Any one of the following system values may be given in DECLARE.KEY.

There are several other records in the DATA MOVE DECLARES subfile which you can view by selecting that subfile, browsing the goal records, and displaying records of interest.

DECLARE ELEMENT Subfile record example

The $NAME.FIRST process is coded as follows:

As you can see, the form of the internal data is the same as you would code for a DECLAREd dynamic element.

If you have coded the following SOURCE.VALUES structure:

DATA MOVE generates the following DECLAREd dynamic element.

7.5  DATA MOVE using PERFORM TABLE CREATE processing

This section gives you a sample SPIRES command stream intended to help you see and use the set of tools that have been implemented to enable you to generate Tables from SPIRES hierarchical data bases.

As an aid to your understanding, you should issue the EXPLAIN commands that are shown in the sample commands.

The following File definition shows how to build a database for your own DECLARE TABLE and DECLARE ELEMENT descriptions. Usr_Tables will be used to store TABLE declarations. [EXPLAIN DECLARE TABLE COMMAND.] Usr_Elements may be used to hold your own Declared Elements. [EXPLAIN DECLARE ELEMENT COMMAND.]

Replace GENERATIONS with your own subfile name in these examples. Also, replace GP.USR with your own account. *names represent your account, so they don't need to be altered.

> select filedef

> display *usr_tables

Our sample database will be one holding Genealogy records

> select generations

> show element characteristics

It's a good idea to set up ELEMINFO data for each element, especially "Width" and "Input-Occ" for multiply occurring data elements. This information will be carried over to the tables that are generated.

> show elem info

Here is how to generate TABLE records in your $Table subfile. [EXPLAIN PERFORM TABLE CREATE DECLARE.]

> perform table create declare subfile generations, type sybase, options mult, dest Usr_Tables

> select usr_tables

> show subfile transactions

> display *generations

Note: I have removed four of the generated column names.

A separate table has been created for the "Child" data element since there may be any number of children. The "Child.Occ" column was generated to hold the occurrence number of a particular child.

> display *child

A number of Declared elements were called in by these table descriptions. These element transformation structures are in the system DATA MOVE DECLARES subfile. [EXPLAIN DATA MOVE DECLARES SUBFILE.] You can build your own database for this purpose (see the definition for Usr_Elements above) or store your Declared elements in the system subfile.

If you are going to move the tables to an external RDBMS database you have the ability to generate DDL statements from your Declared Tables. [EXPLAIN PERFORM TABLE CREATE DDL.]

> perform table create ddl subfile generations from usr_tables

> list unn

In order to extract table data we must build a DATA MOVE record. [EXPLAIN PERFORM TABLE CREATE DATA MOVE.]

> perform table create data move subfile generations from usr_tables

To see a description of the following DATA MOVE record fields, EXPLAIN DATA MOVE PROCESSING.

> list unn

You should not expect that this record is in its "final" form. An ensuing DATA MOVE request may not complete successfully because there is not enough space allocated on the OS volume. This problem may be caused by the fact that you are extracting more data than the OS default allocation gives you (16 extents).

It is a good idea to get an estimate of the amount of space needed. You can issue the PERFORM DATA MOVE command with a "COUNT" value (e.g., COUNT = 1000) [EXPLAIN PERFORM DATA MOVE COMMAND.] and then extrapolate the resulting total length values for each table based upon the total number of records to be extracted. You should then replace the "lrecl=value" field in the "Device.Options" data value with something like "MBYTE=value". [EXPLAIN ASSIGN AREA COMMAND, and see the "tracks=n" option.]
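The extrapolation itself is simple arithmetic; a conceptual sketch in Python (the byte and record counts below are hypothetical):

```python
# Conceptual sketch of the space estimate: run PERFORM DATA MOVE with a
# COUNT value (e.g., COUNT = 1000), then scale each table's resulting
# total length up to the full record count.
def estimate_bytes(sample_bytes, sample_count, total_records):
    return sample_bytes * total_records // sample_count

# Suppose 1000 sample records produced 250,000 bytes for one table, and
# the subfile holds 80,000 records:
estimate = estimate_bytes(250_000, 1000, 80_000)
```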

Now let's Add the above DATA MOVE record to the system subfile and perform the DATA MOVE to generate the tables.

> select data move

> add

> perform data move from *generations_table_move for subfile

The default output is tab delimited data with a header line.

> use wyl.gp.usr.generati.ons clear

> ch x'05' to '|' nol

> list unn

8  External Files

A SPIRES file can be defined in such a way that one of its record-types is an External record-type. This is done by coding the element EXTERNAL-TYPE in the record-type definition. A subfile which has an EXTERNAL goal record-type has some properties that are quite different from those of other SPIRES subfiles. This subfile looks similar to normal subfiles in that its Deferred Queue can be used for retrieval and update, but the "tree" portion of the subfile is "external" to SPIRES. That is, the "tree" is not in a SPIRES RECn data set. Rather, it is located either on a SPIRES Device or on some medium foreign to the SPIRES environment.

This "foreign" information source refers to any source of information that can be moved in some fashion into a SPIRES Device area. This data could come from a WYLBUR data set, or a remote database accessed through a "perl" script that creates a WYLBUR data set. Whatever its source, we will refer to it in this document as "Remote" data.

Accessing External Subfile Data

Data from the external "tree" of a subfile is accessed through normal SPIRES Device Services areas through the use of Formats which transform the information in exactly the same way that INPUT formats do. In fact INPUT formats are used to transform the information from its external form into the SPIRES internal record form. Thus the DISPLAY of a record of an external subfile involves extracting that record from the external device via an input format prior to presenting it to its final destination, possibly via an OUTPUT format.

Similarly, the movement of a record to an external device (the external "tree") is done using an OUTPUT format which converts data from its internal SPIRES form to the form that must be presented to that external device.
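The symmetry between the two transformations can be sketched conceptually in Python (an illustration only; the delimited external form, element names, and values are hypothetical, not an actual SPIRES format):

```python
# Conceptual sketch of external subfile access: an input transform turns
# the external (device) form of a record into its internal form on
# retrieval, and an output transform reverses the process on update.
def input_format(external_line):
    """External device image -> internal record (retrieval)."""
    name, phone = external_line.split("|")
    return {"NAME": name, "PHONE": phone}

def output_format(record):
    """Internal record -> external device image (update)."""
    return f"{record['NAME']}|{record['PHONE']}"

rec = input_format("Smith|555-1212")    # DISPLAY pulls via the input format
line = output_format(rec)               # an update pushes via the output format
```

Round-tripping a record through the two transforms reproduces the original external image, which is the property the paired INPUT and OUTPUT formats provide.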

Advantages of External Subfiles

You may ask why anyone would want to access and/or store data in this manner when SPIRES has always had the capability to move data to and from device Areas such as the ACTIVE file or other SPIRES files, and more recently OS files. Several advantages come to mind.

You can define and manipulate external data in ways that take advantage of the generality, variability and power that are inherent in data stored in a SPIRES subfile. In simple terms, you can deal with external data described by the SPIRES record definition language rather than dealing with it through more rigid Vgroup definitions.

You have the ability to access external data in its context as a subfile, using SPIRES single record display, search commands and sequential scan commands. You are also able to filter the data through normal SPIRES filters, to sequence the data, to access it through Paths and subgoals and to transform it with Output formats or $REPORT, in effect acting upon that information with the full capability of SPIRES.

The Deferred Queue is unique to SPIRES and has a number of properties that can also be used to advantage with external subfile manipulation.

For example, the Defq may be used as a place holder for updates to the external subfile and used to increase efficiency.

The Defq may be used for its power in conjunction with Transaction Group processing and record locking activity.

Data that is currently stored in SPIRES data sets as SPIRES subfiles may be moved to other remote platforms for various reasons. Perhaps data acquisition and maintenance can be better achieved there. Converting the subfile from its current SPIRES form into an External subfile could greatly simplify and smooth the transition.

[See 8.1.]

8.1  External File Record Processing

1. External Record Retrieval

The actual process of record retrieval from an external subfile consists of two or three phases of activity. The optional first phase is essentially a non-SPIRES phase in that SPIRES itself does not have control of the process. SPIRES does have control during the second phase, though the external file definer controls the type of activity that takes place. The final phase is totally under SPIRES control.

This phase consists of the processes needed to move remote information into a SPIRES medium, a Device Area known and understood by SPIRES, with a possible transformation of that data into a form that is to be read by a SPIRES format.

The execution of this phase is under SPIRES control but the subfile definer provides information to govern that control (through the EXTERNAL DATA declaration). Data on the SPIRES device is transformed by an input format and presented to the SPIRES environment as a subfile record. The example of USEing a Wylbur data set above in phase 1 could be eliminated through the use of the OS FILE Device AREA.

This phase mimics the normal SPIRES process once a subfile record has been accessed in a "tree" dataset. The accessed subfile record in its internal form is presented to the calling SPIRES process (FIND, DISPLAY, Global FOR, etc.).

2. External Record Updating

The preceding discussion has been geared to the activity of retrieving records from an external subfile. The updating of external information involves processes similar to those described for the phases of retrieval.

The phases of external file update activity are the reverse of the phases of retrieval activity.

This phase of the update activity is totally controlled by SPIRES and concludes with the generation of a subfile record to be added, updated or removed.

This phase consists of two distinct types depending upon the Control information specified by the external subfile definer. These types are the following.

Whether the Data Output phase is Direct or Deferred the data must be transformed into its external form through an OUTPUT format like $OUTPUT or a subfile definer designated format.

This phase may be modified by the subfile definer by setting control information for the commands which are used to cause data to flow out to a Device Area (and to a remote medium if one is specified).

This phase is triggered by a SPIRES or user provided process that can transform and/or move the data to its remote destination.

[See 26, 8.2.]

8.2  External File Data Declaration -- the EXTERNAL subfile

SPIRES provides a metadata structure developed in conjunction with External subfile support. The purpose of this model is to hold all of the information that the External subfile definer needs to control SPIRES interaction with the data. If the subfile definer utilizes the features made available through this structure, they should find interacting with the data much easier, and the subfile's user community will find that their interaction with the data is the same as if the primary data source were a SPIRES data base.

Any permanent external subfile must be defined with the EXTERNAL-TYPE data element coded within its goal record definition. EXTERNAL-TYPE may be coded as a null value (EXTERNAL-TYPE;), which indicates that the EXTERNAL DATA information will be DECLAREd dynamically, or it may contain the key of a record in the EXTERNAL system subfile (e.g., EXTERNAL-TYPE = *External-data-record;). The data elements that currently make up the External data package are as follows:

[See 8.3.]

8.3  External File Control Data

SPIRES provides a number of mechanisms to aid in the processing control of data movement from its external to SPIRES internal form (data retrieval), or from its SPIRES internal form to external form (data update). The type of control provided and the timing of that control are determined by the values coded for the various data elements in the External Data Declaration structure shown above. You could circumvent these mechanisms by providing alternative methods, but there should be no good reason to do so. SPIRES external subfile support provides the best and most efficient processing in a manner that most closely approximates the feel to which its users have become accustomed.

External Record Retrieval

Control during Retrieval Acquisition

This phase consists of the acquisition of data from the remote data medium and its movement into the SPIRES Device Area. The data element values that control this activity are the following:

Control during Retrieval Input

SPIRES must be told how to locate, transform and process the external data. Information must be supplied by the external subfile definer for each of these three aspects of processing.

Data transformation information -- SPIRES must know the name of the format to use to transform data from its Device Area image into its SPIRES internal record image. This format may be a system supplied format coded to transform data on the Device Area in one of a few standard forms (e.g. $INPUT, $RDBMSCAN, etc.). This format is established at the time of subfile Select, kept hidden (thus not interfering with user supplied formats), and cleared when the subfile is cleared. The data elements involved are:

Command processing control information -- SPIRES must be told what activity it is to perform for various kinds of commands. Note that this control information may be used only if the "tree" of the external subfile is to be accessed. If an external subfile record is found in the Defq during a DISPLAY request then this control information has no effect.

The following information is used for this purpose:

Command processing control examples

Summary of Control Activity

SPIRES provides convenient and straightforward means to control External subfile Acquisition and input processing. Some of this control is very loose and dynamic and open to extensive change by the user. Other control is tightly maintained and may be changed only by the subfile owner or by some SPIRES system process.

External Record Updating

Control during Update Output

Like the Retrieval Input control aspects, the Update Output processing involves the TRANSFORM structure along with COMMAND control options that are exercised during specific update activity. Because of data integrity considerations, SPIRES must also enable control of lower-level operations such as record locking, commit, and decommit.

Command processing control information -- SPIRES expects external subfile control information during any subfile update request. The subfile definer determines the timing and the type of activity that is to take place.

Control during Data Storage Phase

This phase is triggered by a SPIRES or user provided process that can transform and/or move the data to its remote destination. See the description of control during retrieval acquisition above for the RDBMS structure and the PROCS and LIBRARY data elements.

[See 8.4.]

8.4  External File Data Declaration -- Examples

The following sample File Definition will be used to provide an example of how one might use this facility to access and even update data in a remote RDBMS database. This simple file definition is as follows:

The Record definition defined by "GQ.DOC.SYB.PATIENTS" forms a structure which matches a table that is to be accessed through the use of a record in the EXTERNAL subfile whose key is "GQ.WCK.PATIENTS".

This EXTERNAL record is as follows:

This record is set up to access a particular SYBASE table named "patients" accessed through the given Server/Port shown. The RDBMS table has a SPIRES equivalent defined by the RECDEF. The table structure can be shown as follows:

You may use the above search terms in an "external" find request.

Note that since the Search COMMAND.OPTIONS specifies CLEAR, SPIRES will ZAP the DEFQ at the time the search is issued. The external search request is sent to the remote host and the keys of the search result are returned.

Since the command to access SINGLE records specifies "LOAD, NEW", SPIRES retrieves the records and loads each of them into the Defq along with displaying them. The input format $tbtf.read deals with the conversion of the column values from each row into the data element values as seen in the TYPE request.

You can also cause a record to load through DISPLAY.

9  Change Generation

In the client server environment, data is frequently moved or, rather, copied from hierarchical SPIRES records into flat tables in a data base (e.g., a Sybase database) on a different machine. The master record currently resides in SPIRES, where it is updated by Prism users or other mainframe processes. But the updates need to be applied not only to the SPIRES file, but also to the server data base.

Records added to the SPIRES subfile are easily handled by the tables: the table entries are generated, and inserted. Removed records are relatively simple too: the table entries for the tree copy of the removed record are generated, and then passed to the tables marked as deletes. But updated records are more complicated, combining the add and remove procedures: the table entries for the tree copy are generated and then passed to the tables as deletes; and then the table entries for the defq copy of the record are generated, and passed to the tables as inserts. Obviously, if only one element value is changed in the updated record, a lot of unnecessary inserts and deletes would be generated.

With the feature called "change generation", SPIRES can help sort out the update data, returning only the changes that need to be made to the tables. This can save a great deal of processing time on the server, radically reducing the number of table updates that need to be done to keep the table data in synch with the SPIRES records, as well as reducing the amount of data that needs to be schlepped from the mainframe to the server.

The rest of this introduction is devoted to an example that may help clarify these points. The rest of the chapter, starting with the next section, describes how to use this feature. [See 9.1.]

Suppose you have this source record and its update:

When you first copy the data of the original record from SPIRES to flat tables, you flatten it into these records, as new records to add:

But after the record is updated in SPIRES with the new data, shown on the right above, you want the two tables to reflect that data too, with the entries in the tables ending up like this:

The question is, how much do you need to pass to do the update of the tables? Do you want to pass all the data from the original record, marked as data for removal from the table, followed by all the data from the updated record, to be added?

Deleting all of the table entries for the old copy and inserting all of them for the new copy can lead to much unnecessary work. Because there are several changes between the old and new versions of our sample record, there isn't much unnecessary work there: only the deletion and insertion of the Veran child isn't necessary. However, in a larger record, with only a single piece of data being changed, this method of handling changes could lead to an enormous number of unnecessary data updates to the tables.

The alternative that SPIRES offers is "change generation". As it does when it determines what indexes need updating when records are changed, SPIRES will examine the tree and defq copies of a record, determine what has changed, and only report those changes back. In the above example, SPIRES would generate only the changes that needed to be made to the table, eliminating the changes that would cancel each other out:

The rest of this chapter describes how to set up change generation for your application.
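Conceptually, change generation computes the net difference between the old and new sets of table entries, cancelling matched delete/insert pairs; a minimal Python sketch (the entries and values are hypothetical):

```python
# Conceptual sketch of change generation: rather than deleting every
# table entry for the tree copy and inserting every entry for the defq
# copy, only the net differences are reported.
def generate_changes(old_entries, new_entries):
    deletes = [e for e in old_entries if e not in new_entries]
    inserts = [e for e in new_entries if e not in old_entries]
    return deletes, inserts

old = [("Child", "Veran"), ("Child", "Robin"), ("State", "CA")]
new = [("Child", "Veran"), ("Child", "Lee"), ("State", "OR")]
deletes, inserts = generate_changes(old, new)
```

The unchanged Veran entry produces neither a delete nor an insert, so only the genuine changes travel to the server.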

9.1  The Change Generation Procedure

To make change generation work for your application, you need the following:

How It Works

Understanding the process of using the change generation feature will help you understand exactly what the feature can do for you. Chances are that you will do these steps in a protocol, but here we'll demonstrate it as if you were doing it interactively. At the end, we'll discuss the changes you would make to use change generation with output control in a protocol.

Step 1: Selecting the source subfile

Select the source subfile.

Step 2: Selecting the changes subfile

Through a path, select the changes subfile.

Step 3: Set up the SBF area

Define an area on the SBF (SuBFile) area, and then assign that area to the path, naming the path you just opened.

Step 4: Set the output format

Set the output format written for the source subfile that generates input for the changes subfile.

Step 5: Set up Global FOR

Establish Global FOR, choosing the class of records you want to examine for changes. [Not all Global FOR classes make sense in this context; in particular, do not use FOR TREE, since that rules out examination of the deferred queue completely.] Chances are, you want to work with all the deferred queue data. The best way to establish that is shown in the example below:

Note: the DEFQ class under Global FOR usually does not include removed records, but it does include them under change generation.

Step 6: Define a display set

The elements that are to be passed from the source subfile to the changes subfile are identified in the format set in step 4. In this step, you define a display set that will flatten the hierarchical SPIRES records into the table entries you need to create. This step must include the names of all elements being passed, essentially listing all the elements in the source record that were named in the output format of step 4. This is also where you add an option that requests that the generated entries be limited to only those that represent changes, using either the CHANGES option on the DEFINE DISPLAY SET command or specifying GENERATE.CHANGES as a control option in the output control packet. [See 1.9, 7.1.]

The elements that will determine the number of set entries should be specified in the ELEMENTS list on the DEFINE DISPLAY SET command (or in the FOR.EACH statement in the output control packet); other elements whose data you want to pass should be included in the "+ elements" list (or in the PLUS.ELEM statement in output control). So, for instance, using part of the record structure of the example in the previous section:

you would define a display set that generated table entries for the Address table based on the ID and ADDRESS-TYPE elements, since you want an Address entry for each ID and ADDRESS-TYPE combination in the record. You would then add the NAME and STREET elements as "plus elements" to be passed along to the table.

Remember, all elements from the source subfile that you refer to in the output format must be named here as well. Another reminder: if it's appropriate, and it usually is, don't forget to include the TV=ALL option to ensure the inclusion of all occurrences.
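The flattening that the display set performs can be sketched in Python (the record layout below is invented for illustration; it is not SPIRES syntax): one table entry is produced per ID / ADDRESS-TYPE combination, with NAME and STREET carried along as "plus elements".

```python
def flatten(record):
    """One row per address occurrence: (ID, ADDRESS-TYPE, NAME, STREET)."""
    rows = []
    for addr in record["addresses"]:          # each occurrence -> one entry
        rows.append((record["id"],            # ELEMENTS list values ...
                     addr["address-type"],
                     record["name"],          # ... plus-element values
                     addr["street"]))
    return rows
```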

Step 7: Generate the changes

Use the GENERATE SET command to generate the changes, adding them into the changes subfile:

To get an idea of what records have been generated, see "Continuing the Example" below.

Using Output Control with Change Generation

It is undoubtedly more common for a change generation procedure to be constructed using output control.

Changes you would make to the above procedure would probably include:

Continuing the Example...

When you complete your work, SPIRES will have created records in the changes subfile based on the changes to the source records.

If you compare those records to the changes we identified in the previous section that would need to be made to the tables, you'll see they are basically the same:

The notable exception is the appearance of the DELETE element, an element whose existence indicates whether the data is meant to be deleted (representing the old, removed data) or added (the new, updated data).

Your next step would be to take the newly added records in the changes subfile and massage them into input for the tables you want to update. It would be quite easy, for instance, to write an output format that converts these change records into INSERT or DELETE commands for SQL.
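As a sketch of that conversion (in Python rather than a SPIRES output format, with table and column names assumed for the example), a change record carrying the DELETE element maps to a SQL DELETE statement and one without it maps to an INSERT:

```python
def to_sql(change):
    """Convert one change record (a dict) into a SQL statement string."""
    if change.get("delete") is not None:      # DELETE element occurs
        return ("DELETE FROM address WHERE id = '%s' AND address_type = '%s';"
                % (change["id"], change["type"]))
    return ("INSERT INTO address (id, address_type, street) "
            "VALUES ('%s', '%s', '%s');"
            % (change["id"], change["type"], change["street"]))
```

Note the test is for the element's presence, not its value: a zero-length value still counts as an occurrence, matching the way the DELETE element works.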

In the next section of this chapter, we'll cover aspects of the design of the changes subfile. That will include what was needed to make the DELETE element work. [See 9.2.]

(*) Technical Details of How SPIRES Generates the Changes

For anyone interested in the specific details of how this works, here is an explanation of the steps taken by SPIRES, record by record, when change generation is triggered:

1. Retrieve the latest copy of the record of the Global FOR Class.

2. Process the record according to the DEFQ transaction type:

3. Retrieve the tree copy of this record if the transaction type was UPDATE or REMOVE.

4. Process the tree copy as follows:

5. SPIRES generates the two sets of core entries, if any exist. First, $DELETE is cleared and the entries generated from the DEFQ copy of the record are moved to the output device. Next, $DELETE is set and the entries generated from the TREE copy of the record are moved to the output device.

6. Proceed to step 1 to access the next record.
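The per-record loop above can be summarized in a Python sketch (not SPIRES internals; the boolean argument stands in for the $DELETE system flag): an ADD has no tree copy, a REMOVE contributes no defq rows, and an UPDATE processes both copies in two passes.

```python
def process_record(txn_type, defq_rows, tree_rows, emit):
    """Emit change rows for one record of the Global FOR class."""
    if txn_type in ("ADD", "UPDATE"):
        for row in defq_rows:       # pass 1: $DELETE cleared (new data)
            emit(row, False)
    if txn_type in ("UPDATE", "REMOVE"):
        for row in tree_rows:       # pass 2: $DELETE set (old data)
            emit(row, True)
```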

9.2  The Changes Subfile

As the description of the change generation procedure explained earlier [See 9.1.], the changes subfile will serve as a staging area for moving data from the SPIRES subfile to the target database, which is usually one or more tables in another DBMS. From the changes subfile, you may create records in the appropriate format for the target DBMS, or perhaps SQL statements such as INSERT and DELETE, using custom SPIRES formats or the DEFINE TABLE facility.

The changes subfile may be designed however you like, since it is your tool, for your convenience. You may want to place all the changes from one SPIRES subfile in a single changes subfile, or you may want to generate changes into a different changes subfile for each target table. You may decide that the data in the changes subfile is worth keeping in SPIRES for a while as a transaction log; or you may decide to make it part of a temporary file that exists only for the duration of the change generation process.

One key element that you will want to add to your changes subfile is a flag that signals whether the data of the change record is new data that needs to be inserted into the target DBMS or is old data that needs to be removed from it. Here is one easy way to set a "delete" signal element up.

In the technical details of how SPIRES generates change records, described in the previous section, you may have read about how SPIRES generates change records from the tree copy of removed and updated records in the source subfile with a system variable called $DELETE set; it is not set when change records from the defq copy of new and updated records are generated. Since the records are added into the changes subfile as they are generated, the $DELETE flag remains set, if it was set.

So when the change records for the source record's tree copy are added into the changes subfile, the $DELETE flag is set. In the changes subfile, you have created an element whose definition looks like this:

Action A130, an Inclose rule, is designed to test the value of $DELETE; if it is set, then A130 assigns a zero-length value to the element. If it is not set, no value is assigned, and hence the element does not occur. That means that change records generated from the tree copy of a record from the source subfile will have an occurrence of the DELETE element; those generated from the defq copy will not. Whatever you use to format the change records for input to the target DBMS can thus use the DELETE element as a signal for whether the change represents data to be added to the DBMS or removed from it.
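The effect of the A130 rule can be sketched in Python (an illustration of the idea, not the actual SPIRES mechanism): the element is given a zero-length value only when the delete flag is set, so its mere occurrence is the signal.

```python
def build_change_record(fields, delete_flag):
    """Add a DELETE element occurrence only when the flag is set."""
    rec = dict(fields)
    if delete_flag:
        rec["DELETE"] = ""   # zero-length value: presence, not content, signals
    return rec
```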

Because you can test the $DELETE flag yourself either in the source subfile's output format (the one that creates input for the changes subfile) or in Userprocs called from INPROC strings of the elements in the changes subfile, you may develop your own way to create a flag element like this one.

10  XEQ DATA Processing

A new SPIRES facility called XEQ DATA is being made available as a means of offering applications programmers a radically new method of control for certain applications. It is believed that the use of this new technique could result in greatly simplified code as well as a much more efficient processing environment for these applications.

This new technique is probably most useful to those applications which use meta-data records to hold the basic information that controls the flow of the application program. There may be other useful opportunities for the use of XEQ DATA concepts but these have not yet been examined.

XEQ DATA processing forces you to think in new ways about the flow of control in an application. You are to consider that the meta-data record itself, along with its structures and data values, is the primary means of control. This type of processing, by the way, is not new to SPIRES. The SPIRES Compiler has used this technique to compile meta-data records ever since its inception.

This document will describe the various components brought together to provide this new service -- the MSEMPROC and XSEMPROC actions which provide the control, and the XEQ DATA commands which have been built to trigger the processing.

[See 10.1.]

10.1  The Meta-Data record structure and XSEMPROC Actions

A number of SPIRES applications have already been built that utilize a meta-data subfile to hold information that is used to control the flow of the application or to control a part of an application. These records might contain structures that have data that is pertinent to a particular individual or to a certain type of activity that might be performed. The data could be quite complex and variable in its structure, and must be extracted from the record by a SPIRES format and stored into multi-dimensioned arrays before the program can begin to operate on its contents.

XEQ DATA processing eliminates the need for the extracting format and the multi-dimensioned Global Vgroups through the use of XSEMPROC actions which are stored with the database itself and dictate how the data is to be used and which portion of the application is to execute and in what order.
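As a rough Python analogue (not SPIRES syntax), letting the data drive the control flow looks like a dispatch loop: each structure occurrence in the meta-data record names a handler (standing in here for a protocol label such as Start.Report), and the order of the data determines the order of execution -- no extracting format or multi-dimensioned arrays are involved.

```python
def xeq_data(record, handlers):
    """Walk the meta-data record; the data order is the execution order."""
    for step in record["steps"]:
        handlers[step["action"]](step.get("parm"))
```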

Two new data elements must be coded in a meta-data record definition to produce these results. The MSEMPROC element is to be coded in the record level structure and XSEMPROC elements are coded in lower level structures.

MSEMPROC and XSEMPROC values are coded in the same manner as INPROC and OUTPROC values, in that they are made up of strings of Action codes with P1, P2 and P3 values. These actions generally name a data element of the meta-data record and specify how the data values are to be used to control the application.

The Actions that are available for this purpose are described later in this chapter. That description, along with the XEQ DATA example, should provide enough information to convey the general idea of how this feature is used. [See 10.4, 10.5, 10.6.]

[See 10.2.]

10.2  The SET XEQDATA Command

You can tell SPIRES that a particular subfile is to be used in the Xeq Data process by issuing the command:

This command is similar to the SET XEQ command in that the subfile designated (or the currently selected subfile if no name is given) is set aside for this particular use. Note that only one XEQDATA subfile can be in effect at a time. The SET XEQDATA process will clear out any preceding subfile for XEQ DATA purposes.

You may clear out an Xeq Data subfile in the same manner as you clear out the XEQ subfile.

[See 10.3.]

10.3  The XEQ DATA Command

You activate the XEQ DATA process by issuing the XEQ DATA command. This command has the following form:

where the given Key-value specifies the key of a meta-data record within the current XEQDATA subfile. This command should be issued within a protocol, which normally has locally defined variables and labels that are referenced by the MSEMPROC / XSEMPROC actions. The XEQDATA subfile is called "tcontrol" in these examples. [See 10.5.]

The flow of control at this point is as follows:

[See 10.4.]

10.4  XsemProc Action Descriptions

Action A101 :P1, P2 , P3

Action A102, P2

Action A103, P2

Action A104 :P1, P2 , P3

Action A105 :P1, P2

[See 10.5.]

10.5  Xeq Data Sample Meta-Data Subfile Definition

The following record definition is similar to the Declare Output Control description. This will be used to show how multiple reports can be output and controlled by the data. A sample record from the XEQDATA subfile is also shown.

A. Subfile Record Definition (FILEDEF/RECDEF) for the XEQDATA subfile

B. Sample Meta-Data record within the XEQDATA subfile

[See 10.6.]

10.6  Xeq Data Sample Protocol

Declare Vgroup Local
  Var = FormatName; Len 32;
  Var = ParmVals; length 132; occurs = 16;
        Indexed-By = ParmNum;
  var = ParmCnt; type int;
  Var = ParmNum; Type int;
  Var = FrameName; Len 32;
  Var = AreaName; Len 16;
  Var = Options;  Len 32;
  Var = WhereVal; Len 132;
  Var = FilterVals; Len 132; occurs 16;
        Indexed-By FilterNum;
  Var = FilterCnt; type int;
  Var = FilterNum; type int;
  Var = SetTrace;  Type flag;
  Var = Prefix; Len 64;
  Var = ProcessError; Type Flag;
  Var = DataKey;  Len 64;
    Value = '*outc';
Enddeclare

++Init
  * Begin Output Control process
   If $Ask Then Let DataKey = $Ask
  - DataKey defaults to *outc if $Ask is null.
   select tcontrol
   set xeqdata
   define area areax(1,80) on file
   define area areay(1,80) on file
   assign area areax to filex edit,replace
   assign area areay to filey edit,replace
   /xeq data #DataKey
   If $No Then Jump Error
   close area areax
   close area areay
  Return

++Start.Report
  /* Begin processing reports for $Parm
  Return

++Select.Subfile
  /Select $parm
   If $No Then Jump Error
  Return

++New.Report
  /* Begin processing for report $parm
   Eval $Vinit(Local)
  Return

++Process.Report
  If #FormatName Then Begin
    If #ParmVals::0 Then Let ParmVals::0 = ', '#ParmVals::0
    /Set Format #FormatName #ParmVals
    If $No Then Jump Error
    Let ParmNum = 1
    While #ParmCnt > #ParmNum
      If #ParmVals::I Then Begin
        /Set Format * #ParmVals::I
        If $No Then Jump Error
      Endb
      Let ParmNum = #ParmNum + 1
    EndWhile
  Endb
  If #WhereVal Then Let WhereVal = 'Where '#WhereVal
  /For Subfile #WhereVal
  If #AreaName Then Let Prefix = 'In ' #AreaName ' '#Options
  If #FrameName Then Let Prefix = #Prefix ' Using '#FrameName ' '
  /#Prefix Display 10
  If $No Then Jump Error
  EndFor
  Return

++Error
   * Processing Error - Reporting terminated.
   Let ProcessError = $True
  Return

++End.Report
  * Finished Processing Reports
  Return

11  Input Control

A SPIRES feature called Input Control lets you produce SPIRES goal records of a single subfile from multiple streams of input in the form of relational database tables.

The process is designed to create SPIRES record level data elements or structural data element occurrences from multiple "flat files" -- that is, files in the form of relational tables -- made up of multiple "rows", each row consisting of one or more "columns" of information. Additionally, the various streams of "table" input must be ordered and structured in a precise way if reasonable hierarchical SPIRES record structures are to be realized.

In this respect Input Control is much more restrictive than Output Control which may be used to generate multiple streams of output of extremely variable forms. [See 7.]

Input control processing is established through a DECLARE command, DECLARE INPUT CONTROL. Like other declare processes, input control may be established in two ways:

The first section of this chapter describes the input control declaration statements; the second describes how to use input control using the WITH INPUT CONTROL prefix. [See 11.1, 11.2.]

11.1  The Input Control Declaration

Input control is defined by a collection of statements known as "an input control declaration". The input control declaration consists of one or more "input control packets", each of which describes a piece of the input processing to be done. The input processing expectation is that the input stream represented by a particular packet will be tabular in nature and will be read and converted to represent a portion or all of a particular structure or record level occurrence of a SPIRES goal record.

The heart of an input control declaration looks like this:

Up to 36 packets may be defined in a single declaration. The input control statements are individually discussed below.

Each PACKET statement signals the start of another input control packet. The identifier value may be anything; there are no restrictions on it. The packets may be defined in any order but will possibly be executed in a different order based upon the structure of the destination goal record.

If you are storing the declaration in a declare input data subfile, you need to add an ID statement at the top of the declaration:

where "gg.uuu" is your account number and "name" can be any alphanumeric name (it may include periods as well).

On the other hand, to define the input control declaration within a protocol, you need to surround it with the DECLARE INPUT CONTROL and ENDDECLARE commands:

Input Control Statements

Below is a description of each of the possible input control statements in a packet, in the order in which they would be stored in a declare input data subfile. (The order in which you enter them is irrelevant.) Each of the statements is optional, though the occurrence or value of one statement might cause another one to be required.

First, here's a summary list of all the input control statements:

FORMAT = format-name;

This statement names the format that will be in control during the input for this packet. It will be set at the time the DECLARE statement is executed; startup frames will be executed as normal. However, unless the startup frame does something to call attention to itself (like allocating a global vgroup in shared mode; see CONTROL.OPTIONS below), the format, including the setting of it, is invisible outside of the declaration.

Currently the FORMAT statement is required to be present.

At present there has been but one format utilized for any input control process -- the SPIRES standard $INPUT.CONTROL format. If you choose to write your own format, however, the options that you would specify on the SET FORMAT command after the format name may not be specified here; you must enter them in PARMS statements (see below).

USING.FRAME = frame-name;

You can use the USING.FRAME option to name a specific frame to execute within this input packet. The frame must also be defined with a USAGE of NAMED. See the Formats manual, section D.1.1.1 for more information; online, EXPLAIN USING FRAME COMMAND PREFIX.

PARMS = format-parameters;

These multiply occurring statements are the parameters you would normally specify on SET FORMAT (following the format name) and SET FORMAT * commands to set the format from command mode. For example, if you would issue these commands to set the format:

then in input control, your packet would include:

AREA = area-name;

Here you name the device services area to serve as the source for this packet's input. At present the only device area type that has been utilized is the subfile device (SBF) assigned for input. The input data is input directly from a subfile path which has been defined as if it were a Declared Table. You must define and assign the area(s) to be used prior to the input control declaration (see the example below).

TRACE;

This statement, which takes no value, turns on format tracing (SET FTRACE ALL) for the packet. Interleaving of trace data will appear if tracing is requested in more than one packet. You may use the SET TLOG command to send the trace data to a log file rather than your terminal.

CONTROL.OPTIONS = option1, ...;

You can request the one current option by coding the CONTROL.OPTIONS statement:

TABLE.NAME = table-name;

Input control can invoke tables that have been pre-declared with the DECLARE INPUT TABLE command. [See 17.3.] This tool lets you in effect re-map a SPIRES subfile of tabular data into a set of data elements in the destination SPIRES subfile. In this statement, you name the pre-declared input table you want to use.

11.2  Using Input Control

Once the input control definition has been declared, you request input control processing by adding the WITH INPUT CONTROL prefix to an input command, for example, INPUT ADD, INPUT ADDMERGE, etc.

But input control comprises much more than this single command. The setup for any usage of this process requires knowledge of a number of commands and principles.

Input Control processing is in many ways much more restrictive than Output Control processing. Those who are familiar with Output Control are aware of the wide range of possibilities that it gives, even in its "hidden" forms -- such as that utilized by DATA MOVE.

Input Control must be by its very nature a highly controlled process. The final object of the process is to create a specifically defined SPIRES goal record from a set of specifically defined SPIRES tables. These tables are read and transformed by a specifically written SPIRES format which expects the incoming data to be sorted based upon the relationship of structures within the destination goal record definition.

Since data setup and processing is so restrictive we have built a set of SPIRES commands which should go a long way to helping you achieve your final goal -- that of building final form SPIRES records from multiple streams of related RDBMS tables.

The best way to present these commands is in the form of a tutorial showing the building of simple SPIRES database records from multiple tabular data.

We will use a clone of the ubiquitous PATIENTS subfile as the destination subfile. This is a simple file with one multiply occurring structure. All of the data elements except ADDRESS are singly occurring as shown below. ADDRESS however has the ELEMINFO element INPUT-OCC = 2 which will be used when generating Table structures.

We have set up a database to hold Table definitions that will be used by Input Control:

Now we can generate tables that match the structures for this database. Several PERFORM commands are used in the following presentation. Issue EXPLAIN PERFORM TABLE CREATE INPUT COMMANDS for more information.

To see the table record statements resulting from this command [See 11.2.1.]

From the Declared Tables generated for output we can generate a set of Input Tables as follows:

To see the input table records resulting from this command [See 11.2.2.]

If you compare the Input Table records to the corresponding records for output tables you can see that they are very similar in appearance. The primary difference is that the SOURCE... statements in output tables have become DEST... statements in input tables.

Input Control will use these input table constructs in its work via a system format to extract source column data from the table subfiles to store Patient.records subfile data elements (DEST.ELEMs) based upon any Patient.records structure (DEST.STRUCTURE) information.

Note that it will probably be necessary to modify either the output table declaration or the input table declaration, or both, depending upon the characteristics of the destination goal records.

The next task is to construct table subfiles which may be used to describe (and contain) the source data for input. The following command should go a long way to accomplishing this task since it builds a record structure based upon the input table declaration.

To see the table record definitions resulting from this command [See 11.2.3.]

There is another PERFORM TABLE CREATE INPUT command which will generate a protocol that shows the steps needed to actually perform the input. The commands generated must be modified for your particular situation and of course data must be moved into the table subfiles that serve to provide source data.

To see the protocol statements resulting from this command [See 11.2.4.]

If you look carefully at the protocol in Sample 4, you may see some commands that are unfamiliar to you. You are probably unfamiliar with this command construct:

The "input" option is now available for the SBF device processor. This option was developed for this specific use during input control. Also, note that DECLARE INPUT CONTROL utilizes a specific system format $INPUT.CONTROL along with the "share.vgroup" control option. [See 11.1.]

There are also some steps omitted from the sample; primarily, where did the subfiles "SUBF.PATIENTS" and "SUBF.VISIT" come from?

These subfiles represent the SPIRES view of the tables that are to be read by the WITH INPUT CONTROL ADDMERGE command. It is up to you to define and populate these subfiles (and rename them if you wish) with data from whatever source you like. The SUBF table subfiles may be defined by the RECDEFs shown in Sample 3 above. If the tables are RDBMS tables -- say, from ORACLE or SYBASE relational systems -- then the population can be done through the VIA EXTERNAL FIND command.

We have chosen to simplify this example by populating our tables from the existing PATIENTS data base and we will set up our environment to show the actual activity of input control.

Our record level subfile is constructed as follows:

And our Visit structure subfile looks like this:

Note that input control will read data from each table subfile in the order that the table records appear sequentially in the subfile. The order is of paramount importance. This is why the RECDEFs have fixed-length structured keys. SPIRES utilizes this method to arrange the order of input. The number of data elements forming the structured key increases as the depth of the final destination goal record structure increases.
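Why the sort order matters can be shown with a Python sketch (an analogue of the merge, not the SPIRES implementation; the field names are invented): the record-level stream and the structure-level stream are merged sequentially on a shared key prefix, which only works if both streams arrive sorted on their structured keys.

```python
def build_goal_records(patient_rows, visit_rows):
    """Merge two sorted table streams into hierarchical goal records.

    patient_rows: [(key, data), ...] sorted by key.
    visit_rows:   [((key, visit_no), data), ...] sorted by structured key.
    """
    records, i = [], 0
    for key, data in patient_rows:
        rec = {"key": key, "data": data, "visits": []}
        # consume the visit rows whose structured key begins with this key
        while i < len(visit_rows) and visit_rows[i][0][0] == key:
            rec["visits"].append(visit_rows[i][1])
            i += 1
        records.append(rec)
    return records
```

If the visit stream were out of order, rows would be attached to the wrong parent or skipped, which is exactly what the fixed-length structured keys prevent.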

You can look at the actual execution of this protocol by issuing the command: [See 11.2.5.]

11.2.1  Input Control commands -- Sample 1: Declare Tables

In order to understand the context of this example [See 11.2.]

These commands show how to generate Declare Table structures for the sample PATIENT.RECORDS database. [See 17.1.]

11.2.2  Input Control commands -- Sample 2: Declare Input Tables

In order to understand the context of this example [See 11.2.]

From the Declared Tables generated for output we can generate a set of Input Tables as follows: [See 17.3.]

11.2.3  Input Control commands -- Sample 3: Input RECDEF definitions

In order to understand the context of this example [See 11.2.]

Record Definitions may be generated from Declare Input Tables.

11.2.4  Input Control commands -- Sample 4: Input Control protocol

In order to understand the context of this example [See 11.2.]

The following command can be used to get you started with the task of running an input control process.

11.2.5  Input Control commands -- Sample 5: Processing

In order to understand the context of this example [See 11.2.]

The following protocol execution demonstrates the final phase of input control. The first two complete goal records are displayed.

12  The REFERENCE Command, Partial FOR

The REFERENCE command can be used to bring a record from the selected subfile (or attached file) into main memory. Once there, element values can be accessed and placed into variables with the $GETxVAL functions. The REFERENCE command is also used in Partial FOR processing, which will be explained in the upcoming manual "Partial Record Processing in SPIRES".

The syntax of the REFERENCE command is:

where "key" is the key of the record to be brought into memory from the file. The REFERENCE command can also be used under Global FOR processing, in which case the syntax is:

Since only one record can be "referenced" at a time, using the value "n" means that you want to reference the "nth" record from the current one. The options FIRST, *, NEXT, "n" and LAST are standard Global FOR options. If none of them is used, NEXT is assumed, and the next record in the class that fits the WHERE clause criteria, if any, is referenced.

The NOUPDATE option tells SPIRES that the record is being referenced only for data element retrieval and display purposes -- the record will not be updated using partial processing techniques. If secure-switch 10 is set for the subfile, the same record cannot by default be referenced simultaneously by multiple users, which prevents any possibility of several people updating the same record at the same time. The NOUPDATE option tells SPIRES to allow you to reference the record even if someone else has it referenced, because it is not your intention to update it. Thus, the NOUPDATE option allows you to override the locking set by the existence of secure-switch 10 when you do not want to update the record.

RESTRICTED is similar to NOUPDATE but additionally blocks any partial processing of the record. Only the FOR * command may be used for processing the record. [See 12.6.] RESTRICTED is useful for core management, in that SPIRES does not reserve the space (twice the amount specified by SUPERMAX) it would set aside for record expansion during partial processing.

Both NOUPDATE and RESTRICTED may be followed by the FILTERED option, which applies any filters in effect to the record as it is built in core. [See 21.1.] Thus, the internal memory used for the record can be considerably smaller than might be required for the entire record.

Be aware that SPIRES "builds" the referenced record internally just as it builds a new record being added. If any required elements have been filtered out when the FILTERED option is invoked, an S419 error ("A required data element value was not input for the record being built.") will result.

$GETxVAL functions with REFERENCE

Once a record is brought into main memory, the $GETUVAL, $GETCVAL, $GETXVAL and $GETIVAL functions can be used to access the unconverted or converted values of any occurrence of any element in the record.

See "SPIRES Protocols", or use the EXPLAIN command, for more information about these functions.

12.1  Introduction; the REFERENCE Command

While it is possible with whole record modification to add or remove partial data from records, the whole record must be "unpackaged" and presented to the user and then entirely "repackaged" after the modifications are made. This is costly, particularly if only a small part of the overall record is involved. Furthermore, CRTs and other fixed-dimension devices do not have the text editor's practically unlimited capacity; hence, the capability must exist for the presentation and manipulation of records in a piecemeal, or partial fashion.

Global FOR allows linear traversal of a set of records by means of commands such as SKIP, DISPLAY, REMOVE, etc. The set of added records (FOR ADDS) is distinctly different from the set of updated records (FOR UPDATES). Each set of records could be thought of as multiple occurrences of a structural element, and those occurrences constitute a single level of control called the record level. But other levels exist due to the hierarchical nature of records. Structural elements within records form subtrees, and the REFERENCE command provides access to these subtrees by means of "Partial FOR" commands. Such accessing is called "partial record processing" or simply (and more commonly) "partial processing".

At the record level, the following commands establish what is called a "referenced record":

The two forms of the REFERENCE command establish a referenced record by retrieving an existing goal record from the data base. (The second form is used in Global FOR processing.) If the NOUPDATE option is given, the UPDATE command (shown below) is blocked, and secure-switch 10 record locking is not performed. RESTRICTED is the same as NOUPDATE, except that in addition, partial processing commands are blocked; only FOR * processing is allowed under RESTRICTED. [See 12.6.]

The GENERATE REFERENCE command establishes an empty referenced record.

At the record level, a referenced record may be returned to the data base (updating the deferred queue) by the following commands:

The key-value in the UPDATE command is not needed if the referenced record was established by a REFERENCE command, and the key in the record being updated is the same as the original key.

The CLEAR option on UPDATE or ADD may be either CLEar or CLR. If CLEAR is not specified, the referenced record is retained. This allows the user to do further modification of the referenced record, including its key, so that another ADD or UPDATE can be done using the same basic referenced record. When CLEAR is specified, and the UPDATE or ADD command succeeds, the referenced record is released.

A referenced record may be released at any time by the command:

12.2  Record Navigation

Once a referenced record is established, Partial FOR commands may refer to data elements, beginning with those defined at the record level; and once Partial FOR has chosen an element, other partial processing commands can be used to manipulate the chosen element. One of those partial processing commands is another form of the REFERENCE command, and if Partial FOR has chosen a structural element, the referencing of an existing occurrence of that structure causes Partial FOR to then refer to the data elements defined for that structure. Therefore, the structural hierarchy of a referenced record may be traversed by nested pairings of Partial FOR against a structural element, and REFERENCE of an existing occurrence of that structure.

The general form of the Partial FOR command is:

The "FOR element" command provides the basic mechanism for all other partial processing. It specifies either a structural or simple data element, and the optional WHERE clause can specify criteria to be applied in locating occurrences of that element. If the element is a simple data element, the WHERE clause may only refer to that element. Therefore, the special form

may be used as a shorthand for

Note that for a structural element, the WHERE clause may refer to any element contained at any level within the structure.

Partial FOR may only refer to a particular set of data elements, those at the last referenced structural level. If no Partial FOR commands are in effect for a referenced record, Partial FOR may refer to only record level elements. The SHOW REFERENCE ELEMENTS command lists all the primary element names that may be specified in a subsequent Partial FOR command. The form of the command is:

As stated earlier, it is possible to move down several levels in the structural hierarchy of a referenced record by nested pairings of Partial FOR and REFERENCE against structural elements.

The ENDFOR command is used to back up one or more levels. When used with Partial FOR, the general form of this command is:

ENDFOR cancels Partial FOR commands, and if no element name is given, the Partial FOR associated with the current level is cancelled (moving back up one level). Otherwise, the specified element name must correspond to one of the active Partial FOR commands, and all levels up through that Partial FOR are cancelled.

A condition known as a "referenced state" exists immediately following either the ENDFOR of a Partial FOR, or the REFERENCE of an existing occurrence of a structural element specified by Partial FOR, or the establishment of a referenced record by the record-level REFERENCE or GENERATE REFERENCE command. When a referenced state exists, Partial FOR commands may only refer to data elements at the next level in the hierarchy. Otherwise, Partial FOR commands may replace themselves at the current level since they may only refer to elements at that level. Therefore, a referenced state shifts by one level the allowed set of data elements that Partial FOR may specify. A Partial FOR command issued when a referenced state exists moves to the next level in the hierarchy; but when a referenced state does not exist, Partial FOR commands remain at the current level.
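The level-shifting rules above can be sketched as a small illustrative model (Python, with invented names; this is not SPIRES syntax). A referenced state lets the next Partial FOR descend one level; without one, a Partial FOR merely replaces the FOR at the current level:

```python
# Illustrative model of Partial FOR level rules (not SPIRES syntax).
class ReferencedRecord:
    def __init__(self):
        self.levels = []              # stack of active Partial FOR element names
        self.referenced_state = True  # true right after REFERENCE/GENERATE REFERENCE

    def partial_for(self, element):
        if self.referenced_state:
            self.levels.append(element)   # descend to the next level
            self.referenced_state = False
        else:
            self.levels[-1] = element     # replace the FOR at the current level

    def reference_occurrence(self):
        # REFERENCE of an existing structural occurrence
        self.referenced_state = True

    def endfor(self):
        self.levels.pop()                 # back up one level
        self.referenced_state = True      # ENDFOR also leaves a referenced state

rec = ReferencedRecord()      # record-level REFERENCE: referenced state exists
rec.partial_for("S")          # descends to the record level: levels = ["S"]
rec.partial_for("T")          # no referenced state: replaces S at the same level
assert rec.levels == ["T"]
rec.reference_occurrence()    # REFERENCE an occurrence of T
rec.partial_for("U")          # now descends: levels = ["T", "U"]
assert rec.levels == ["T", "U"]
```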

12.3  Partial Processing Commands

These commands provide manipulative control over the element named in the preceding FOR command:

The optional "occurrence" or "range" specifies which occurrence (or occurrences) of an existing FOR element should be processed. If the FOR command specified a WHERE clause, then only occurrences which meet the criteria are considered for processing, although every existing occurrence in any range is examined. Any WHERE clause is ignored for the ADD commands.

The allowed occurrence or range specifications are:

When a "FOR element" command is issued, the following conditions exist: the NEXT occurrence is the FIRST; no occurrences have been processed, so the * specification is not allowed; and a referenced state does not exist. The FOR ** command reinstates these conditions. It cancels any referenced state, and sets NEXT equal to FIRST. On the other hand, the FOR * command retains any established referenced state, and does not reset NEXT back to FIRST. It only eliminates any previous WHERE clause, and establishes a new WHERE clause if one is given with the command. FOR * allows you to change WHERE clauses in the middle of processing.

For most partial processing commands, if no occurrence or range is specified, NEXT is assumed. The * occurrence or range specification is not allowed following any FOR command until some other occurrence specification has been given (including the default NEXT). It also cannot be used if the last occurrence processed was a removed occurrence (REMOVE, MERGE, and UPDATE can do "removal").

The optional END clause provides command processing whenever no occurrence can be found to process. The general form of the END clause is:

where "command" is either a single command verb like RETURN or is an enclosed string like 'JUMP LABEL'.

Whenever an occurrence cannot be found, an END condition is signaled. Such a condition is cleared by an END clause, by an ENDFOR command if in XEQ mode, or else automatically. If an END condition is not cleared, no other partial processing or FOR commands are accepted until an ENDFOR command is processed.

The partial processing commands operate as follows:

When there is no referenced record, the UPDATE command may also include the WITH DATA option, to specify that the data to be used is included on the UPDATE command. [See 25.]

When there is no referenced record, the MERGE command may also include the WITH DATA option, to specify that the data to be used is included on the MERGE command. [See 25.]

When there is no referenced record, the ADD command may also include the WITH DATA option, to specify that the data to be used is included on the ADD command. [See 25.]

12.4  Partial Processing UPDATE and MERGE Capabilities

Both UPDATE and MERGE have unique capabilities in Partial Processing. The first thing these commands do is input one or more occurrences of the Partial FOR element, possibly via a FRAME-TYPE=STRUCTURE frame with proper USAGE and DIRECTION. These input occurrences usually match the range of occurrences processed by these commands on a one-for-one basis. The first occurrence found to process (according to any WHERE clause criteria) is occurrence number one, the second is occurrence number two, etc. However, it is possible to number the input occurrences such that they don't always match with the processed occurrences. Consider the following:

A sample record might look like the following:

All elements are shown with an occurrence number in parentheses. That was done by declaring PRIV-TAG numbers on each element, and then assigning those numbers to either CONSTRAINT or NOUPDATE. To conserve space, multiple occurrences of the simple elements were placed on the same line. The values of the simple elements show the complete occurrence path leading to them. Thus, the value B1212 represents the occurrence of B given by the path: S(1); T(2); U(1); B(2);.

Now consider the following sequence of commands:

The output by TRANSFER might look like the following:

If everything in T is updateable, then the UPDATE command would "replace" these same two occurrences by whatever is input as the replacement. But consider the following input:

The occurrence numbers shown in the input don't match with those of the occurrences to be processed. T(1) does not occur in the input, so the first occurrence processed will be "removed". T(2) refers to the second occurrence, so that occurrence will be "replaced". T(3) now comes along, and there are no more occurrences to process since we are only updating two (the same ones found by TRANSFER ALL). This and any other extra input occurrences are considered an "addition" to be inserted immediately following the last occurrence processed. Following the UPDATE, if we requested TRANSFER ALL again, we'd get:

On the surface, it seems as though the UPDATE caused all the occurrences to simply be renumbered. But that would not necessarily always be the case. If there had been three or more occurrences of the T structure, and a WHERE clause on the FOR T command chose non-adjacent occurrences to process, then "removal" of the first and "replacement" of the second could cause a non-processed occurrence to fall between the two processed occurrences. Here is an example, where the original record contains:

to be processed by:

with input of:

which would result in the following:

This example serves to illustrate a couple of points. First, notice that none of the final occurrences meet the original WHERE clause criteria. Second, close examination of the result shows that the original T(1) is gone, and that what was the old T(2) is now T(1). The old T(3) is also gone, with the new T(2) taking its place. The new T(3) is an "addition", followed by the old T(4).

In all of the examples thus far there has been an assumption that "everything in T is updateable". But what would happen if an occurrence of T contained non-updateable or invisible elements? The process of "removal" would not eliminate such an occurrence, but would only remove all updateable information, leaving just the non-updateable or invisible portion. Likewise, a "replacement" operation would merge the non-updateable or invisible portion of the old occurrence into the input replacement. All non-updateable material is dropped from the input data before the input is used for either "replacement" or "addition". The basic rule for UPDATE is that all updateable material in the original occurrences is discarded, and the input supplies new updateable material, either as a "replacement" or "addition".
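The basic UPDATE rule can be modeled with an illustrative sketch (Python; the function and field names are invented, not SPIRES syntax). Each occurrence is split into an updateable part and a non-updateable/invisible part, and the input supplies new updateable material keyed by input occurrence number:

```python
# Illustrative model of Partial Processing UPDATE (not SPIRES syntax).
def partial_update(processed, numbered_input):
    """processed: list of {'upd':..., 'fixed':...} occurrences, in process order.
    numbered_input: dict mapping input occurrence number -> new updateable value."""
    result = []
    for i, occ in enumerate(processed, start=1):
        if i in numbered_input:                      # "replacement"
            result.append({'upd': numbered_input[i], 'fixed': occ['fixed']})
        elif occ['fixed'] is not None:               # "removal" keeps the fixed part
            result.append({'upd': None, 'fixed': occ['fixed']})
        # otherwise the occurrence is totally removed
    for i in sorted(n for n in numbered_input if n > len(processed)):
        result.append({'upd': numbered_input[i], 'fixed': None})   # "addition"
    return result

old = [{'upd': 'T1', 'fixed': None}, {'upd': 'T2', 'fixed': None}]
new = partial_update(old, {2: 'T2-new', 3: 'T3-new'})
# T(1) had no input and no fixed part, so it is removed;
# T(2) is replaced; T(3) is an addition after the last processed occurrence.
assert [o['upd'] for o in new] == ['T2-new', 'T3-new']
```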

For both UPDATE and MERGE processing, if input occurrences of the current Partial FOR element are not numbered, they are simply considered to be sequentially numbered from 1 on up. Thus, the following are equivalent inputs:

Warning: Do not mix numbered and unnumbered occurrences of the Partial FOR element in a single input. In the example above, T was either always numbered or unnumbered. The same warning applies to multiple occurrences of each element that occurs within any particular structural occurrence. In the example above, it would be improper to have something like D(2) = <3>; D = <4>; within the second occurrence of T. Processing results are unpredictable when such mixtures occur.

It's important to realize that the processed occurrences are always considered to be numbered beginning with 1, regardless of where the range begins within the set of actual occurrences. For example, if there were ten actual occurrences of T, and six of them met WHERE clause criteria, then:

Although each range of occurrences begins at a different place within the set of all occurrences, the first occurrence processed by a range is always numbered 1, the second is numbered 2, etc. This rule applies to all commands that specify a Range to process: DISPLAY, TRANSFER, UPDATE, MERGE, and REMOVE. If the input to UPDATE or MERGE has occurrences that fall beyond the process range, then those occurrences are considered "additions" to be placed immediately following the last processed occurrence of the range. All other input corresponds to occurrences within the process range. If an occurrence in the process range doesn't have corresponding input, the UPDATE command recognizes that as a signal to do "removal" processing against that occurrence, while the MERGE command simply "skips" that occurrence, leaving it unchanged.

The basic rule for MERGE is that updateable material in the original occurrences is retained, unless the input supplies new updateable material, either as a "replacement" or "addition". MERGE can also indicate "removal" by specifying a negative occurrence number in the input. Consider the first example given for UPDATE, but this time for MERGE:

The output by TRANSFER might look like the following:

Again assume that everything in T is updateable, and that the input for the MERGE command was the same as shown for UPDATE:

The result would be:

The first occurrence of T was left unchanged. Within the second occurrence, selective values of B and D were replaced or added. T(3) was an "addition" just like in the UPDATE example.

Under MERGE processing, element(-n); does "removal" processing for the n-th occurrence of the specified element. If the input had specified things like B(-2); or D(-1); then the selected occurrence (positive equivalent) would be removed. That would even be true for the Partial FOR element itself; thus if the input had been only T(-1); the result of a command like MERGE LAST would be the same as if REMOVE LAST had been done.
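The MERGE rule, including "removal" via a negative occurrence number, can be modeled similarly (illustrative Python; the names are invented, not SPIRES syntax):

```python
# Illustrative model of Partial Processing MERGE (not SPIRES syntax).
# MERGE keeps original updateable material unless the input supplies a
# replacement; a negative input occurrence number requests "removal".
def partial_merge(processed, numbered_input):
    """processed: list of occurrence values, in process order.
    numbered_input: dict mapping occurrence number -> replacement value;
    a negative key -n removes the n-th processed occurrence."""
    removed = {-n for n in numbered_input if n < 0}
    result = []
    for i, occ in enumerate(processed, start=1):
        if i in removed:
            continue                              # "removal" via negative number
        result.append(numbered_input.get(i, occ)) # replace, or "skip" (keep)
    for i in sorted(n for n in numbered_input if n > len(processed)):
        result.append(numbered_input[i])          # "addition" after the last occurrence
    return result

old = ['T1', 'T2']
# Same input as the UPDATE example: T(1) untouched, T(2) replaced, T(3) added.
assert partial_merge(old, {2: 'T2-new', 3: 'T3-new'}) == ['T1', 'T2-new', 'T3-new']
# Input of only T(-1) removes the first processed occurrence, like REMOVE.
assert partial_merge(old, {-1: None}) == ['T2']
```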

Finally, the * Occurrence or Range specification is invalid following an UPDATE or MERGE if the last occurrence processed was totally removed and no "additions" occurred. If "additions" occur, the last one added defines the * position. Otherwise, the last processed occurrence (not removed) defines the * position.

12.5  The SHOW LEVELS Command

The following command provides useful information concerning partial processing:

The SHOW LEVELS command shows the Processed and Examined counts for Global FOR (if applicable) and each nested Partial FOR. Each Partial FOR also shows the corresponding FOR element name.

The Processed and Examined counts vary according to the following rules. The "FOR *" command sets the Processed count to zero, but leaves the Examined count unchanged. The "FOR element" and "FOR **" commands both set the Processed and Examined counts to zero. Partial Processing commands which specify an Occurrence or Range specification of FIRST, LAST, or ALL also set the Processed and Examined counts to zero, after which they proceed to find at least one occurrence starting from the FIRST position. As a general rule, any command which sets both the Processed and Examined counts to zero causes existing occurrences to be "renumbered". An occurrence or range specification of NEXT, "n", REST, or REMAINING begins with the Processed and Examined counts at their current values, and then they proceed to find at least one occurrence starting from the NEXT position. FIRST, LAST, NEXT, and "n"=1 represent a single occurrence. The * specification, if valid, also represents the single occurrence at the "current" position.

The number of occurrences actually processed by a single Partial Processing command is called the "process range". The number of occurrences examined to satisfy any particular process range is the corresponding "examined range". The Processed and Examined counts are altered by adding the corresponding range amount following the completion of a command. The ADD command does not alter the Processed count, and only occurrences added by ADD BEFORE increment the Examined count. Also, UPDATE and MERGE can have extra "additions" inserted following the last processed occurrence. The total number of such "additions" increments the Examined count. UPDATE, MERGE, and REMOVE can do total "removal" of occurrences. The total number of such "removals" decrements the Examined count. When * is used as an occurrence or range specification, the Processed and Examined counts are not varied unless an UPDATE, MERGE, or REMOVE causes total "removal" of that occurrence, in which case the Examined count is decremented by one, and * becomes an invalid specification.

The two system variables $PXCOUNT and $PPCOUNT contain the count of elements examined and processed respectively at the current Partial FOR level.

12.6  The FOR * command

When a referenced record has been established, and no Partial FOR commands are currently active, the FOR * command may be used to establish a special mode of processing. The form of this command is just:

There is no WHERE clause allowed. When this special mode is established, DISPLAY, TRANSFER, and MERGE commands may be used to process the entire referenced record, not as individual elements, but as a single unit. All the elements of the record can be accessed at once. Since the full record is processed each time, the DISPLAY, TRANSFER, or MERGE commands do not require a Range specification, and an END clause is meaningless. Formatted output and input can be done using record-level frames (FRAME-TYPE = DATA;).

ENDFOR (or ENDFOR *) terminates this special FOR * mode.

12.7  Partial Processing to the Rescue

Occasionally a situation arises where a set of records is to be found and processed with Global-FOR WHERE criteria such that the conditions of the WHERE clause cannot be guaranteed. For example, using the sample record described in section 12.4, find the records which have "D STRING 2" and "B STRING 3" in the same occurrence of the T structure. D belongs to the T structure, but B belongs to the U structure, which in turn belongs to T. They are not in "exactly the same structure". Yet it is possible that some occurrence of U within a particular T could have the required B value, and D in that same T could have its required value. It would not be possible to use the @-sign to indicate "same structure processing", since B and D are in different structures. But the Global-FOR WHERE clause could at least specify that the records chosen should have both conditions satisfied, although possibly from different occurrences of the T structure.

This retrieves a candidate record. The first thing that might come to your mind is: "This requires retrieving the record; isn't that expensive?". Not really, because the record needs to be retrieved to check out the WHERE clause criteria, and doing a REFERENCE of that record doesn't retrieve it again. Next you might ask: "What good does it do to REFERENCE the record?". The answer is that partial processing can now determine if this record meets the true criteria, the B and D conditions being met in the same occurrence of T. A protocol to do this might look like the following:

The difference here is that the WHERE clause on FOR T is restricted to examining the B and D criteria within each occurrence of T, not across all occurrences. So Partial-FOR WHERE clauses exhibit a "same occurrence" property for criteria that occur at different levels within a particular structural occurrence of the FOR element. Of course, "same structure" processing (indicated by the @-sign) could still be done for elements which are within a single structure, such as B and @C or F and @G, or even multiple occurrences of a single element within a structure, such as H and @H, D and @D, or B and @B.
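The difference between the two WHERE clauses can be sketched with an illustrative model (Python; the data values echo the example, but the representation is invented, not SPIRES syntax):

```python
# Illustrative model of the "same occurrence" property (not SPIRES syntax).
# Each occurrence of T carries its own D values and, via U, its own B values.
record = [
    {'D': ['D STRING 1'], 'U': [{'B': ['B STRING 3']}]},   # T(1): B matches only
    {'D': ['D STRING 2'], 'U': [{'B': ['B STRING 1']}]},   # T(2): D matches only
]

def b_values(t):
    return [b for u in t['U'] for b in u['B']]

# Global-FOR WHERE: both conditions met somewhere in the record -- passes,
# even though they come from different occurrences of T.
globally_ok = (any('D STRING 2' in t['D'] for t in record) and
               any('B STRING 3' in b_values(t) for t in record))
assert globally_ok

# Partial-FOR WHERE on T: both conditions within one occurrence -- no T qualifies.
per_occurrence = [t for t in record
                  if 'D STRING 2' in t['D'] and 'B STRING 3' in b_values(t)]
assert per_occurrence == []
```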

As a final example, assume a goal record with the following structure:

Suppose the processing requirement is to purge all FUND structures from the subfile where FUND-ID=8 and FUND-YR < 1980 are satisfied in the same occurrence of the structure. As an additional requirement, the AMOUNT of the donation must be less than 10000. The request could be done with the following protocol:

12.8  Using the INCLOSE Command to Close-Out a Referenced Record

A frequent partial processing situation involves multiple merges into the referenced record where it would be useful to have SPIRES execute various Inclose tasks as the record's processing continues, without having to send the record to the deferred queue. For instance, you might want to retrieve element values that are determined from other elements and are thus not computed till record closeout. Or, for another example, you might need to sort the occurrences of an element for display.

The INCLOSE command provides a handy way of getting SPIRES to execute Inclose rules for a record without committing the record to the subfile (though that is usually done later). All the Inclose rules are executed as if a subfile transaction were taking place, but the referenced record is neither added nor updated.

Once the INCLOSE activity is done, those elements whose values are set by Inclose rules will have values that can be retrieved through continuing Partial FOR processing.

The command's syntax is:

The options are valid only when you are working with structural occurrences, to close out the structure's contained elements. The range values are described elsewhere. [See 12.3.] The range is assumed to be NEXT if not specified. (See below for more information about INCLOSE with structures.)

Otherwise, INCLOSE (with no options) can be issued when a record is referenced, from the record-level, or under "FOR *" processing; it succeeds only when there have been changes to the data (elements added, updated or removed) since you began working with the record or since the last time you issued the INCLOSE command for this record.

A common sequence for working with INCLOSE is the following:

If you make no further changes to the record after the INCLOSE command and then issue a final ADD or UPDATE command to send the record to the deferred queue, the Inclose processing will not be repeated (with the exception noted below). So, for example, a "time-updated" element would contain the time the INCLOSE command was executed, not the time the record went into the deferred queue.

However, if you do make more changes to the record after an INCLOSE command, the ADD or UPDATE command will invoke the Inclose processing again. Thus the "time-updated" element would get a new value, because SPIRES would replace the old value with a new one. Note though that all the elements will already have values, so Inclose rules that provide values only if the element has none (e.g., a "date-added" element) will not get a new value.
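The two kinds of Inclose rules behave like this sketch (illustrative Python; the element names are the examples from the text, not actual rule syntax):

```python
# Illustrative model of repeated Inclose processing (not SPIRES syntax).
# A "time-updated" style rule always assigns a value; a "date-added" style
# rule assigns one only when the element still has no value.
def inclose(record, now):
    record['time-updated'] = now          # always replaced with a new value
    record.setdefault('date-added', now)  # set only if the element has no value
    return record

rec = {}
inclose(rec, now=100)    # INCLOSE command during partial processing
inclose(rec, now=200)    # Inclose runs again at ADD/UPDATE after further changes
assert rec['time-updated'] == 200   # replaced with a new value
assert rec['date-added'] == 100     # kept: the element already had a value
```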

Note: the processing rules for slot keys and augmented keys involve some Inclose processing that occurs only when the record is moved into the deferred queue. Hence the final form of the key may not yet be available after an INCLOSE command. If the goal record has a slot key, you will get a warning message to this effect when you issue the INCLOSE command. If it has an augmented key, the key will not be in its final form but will be in an intermediate form of little use to you.

INCLOSE with Structures

Under "FOR structure", INCLOSE performs closeout against the range of occurrences that are found. Of course, the "FOR structure" command can have a WHERE clause that picks certain structures based on content criteria.

If a serious error occurs during INCLOSE processing, no further processing is done. You can use $PRTCNT, $GPCOUNT and $GXCOUNT variables to determine when an error has occurred.

The INCLOSE command does not execute the Inclose rules of the structure itself, just those of the elements contained within the structure's occurrence. For example, if the structure itself has a $SORT Inclose rule, the INCLOSE command will not cause the sorting to be done; that won't happen till normal closeout of the whole record, or if INCLOSE is issued for the whole record or any structure containing this structure. Also, keep in mind that the sort won't happen at all unless the structure has been modified or referenced during referenced record processing.

13  I/O Monitoring Commands In SPIRES

SPIRES provides two monitoring facilities that allow a user or programmer to determine when ORVYL data sets (that is, the ORVYL files that comprise SPIRES files) are attached, and to collect statistics on the number of reads and writes to those files.

You will find the SHOW FILE COUNTS facility the easiest to use. It displays a table of read/write counts to the various data sets attached, and does not need to be "turned on" in order to collect the data (though it can be reset). It is available in SPIRES and, in the guise of the SET FILE COUNTS command, in SPIBILD as well. The newer of the two methods, it is described in the last section of this chapter. [See 13.5.]

The older monitoring facility, called "SINFO" for "SPIRES INFOrmation", is also available in both SPIRES and SPIBILD. The facility only gathers statistics when it is enabled. Its table of read/write counts is considerably more difficult to interpret than the SHOW FILE COUNTS display. On the other hand, when you SET SINFO, SPIRES also displays the name of each ORVYL data set and its "device identifier" when it is attached. [See 13.1.]

13.1  The SET SINFO (SET SIN) Command and the "S" Parameter

The SET SINFO command is used to enable I/O statistics monitoring and reporting. Once the command is issued, the system will report the file name and device identifier (a number indicating an ordinal position in an internal ORVYL table) of every file or device as it is attached; also, the system will begin recording the number of reads and writes to those devices.

For example:

The display indicates that device 6 is REC1 of the system Formats file, device 7 is the deferred queue of the system Formats file, etc. No information is displayed to indicate when a device is detached; however, subsequent attaches will reuse the device identifier freed by an earlier detach.

The "S" parameter is available on the SPIRES and the SPIBILD commands to SET SINFO internally before any files are attached by the processor.

13.2  The SET NOSINFO (SET NOSIN) Command

The SET NOSINFO command is issued to disable I/O monitoring and statistics gathering. File attaches are no longer displayed, and read and write statistics are no longer updated; statistics already gathered are not affected.

13.3  The SHOW SINFO (SHO SIN) Command

The SHOW SINFO command displays counters indicating the number of reads and writes to each attached device while SINFO is enabled.

The CLEAR option resets the counters to zero after the display, as if you had issued a SHOW SINFO / CLEAR SINFO combination. [See 13.4.]

The command, available in SPIRES or SPIBILD, currently displays two sets of 24 counters each. The first set shows the number of reads for devices 1 through 24; the second set shows the number of writes for devices 1 through 24. The counters are displayed on three rows as follows: the first row holds read counts for devices 1 through 16; the first half of the second row holds read counts for devices 17 through 24; the second half of the second row holds write counts for devices 17 through 24; and the third row holds write counts for devices 1 through 16.

The drawings below show how the devices are represented for reads and writes on the three rows of the SHOW SINFO display.

    1 - R -- 8    9 - R - 16
1. -----------   -----------

   17 - R - 24   25 - R - 32
2. -----------   -----------
                 17 - W - 24

    1 - W -- 8    9 - W - 16
3. -----------   -----------
   25 - W - 32

For example:

This display indicates that there were two reads of device 6 (REC1 of the system Formats file), 1 read of device 7 (the deferred queue of the system Formats file), and 1 read of device 8 (the residual of the system Formats file), to execute the user's command to SET FORMAT.

Note that a single counter may indicate I/O's to more than one file: within the span of a single command's execution, one device identifier may be used for more than one file if files are attached and detached during a command; the display of attaches and detaches from the SET SINFO command will indicate this situation.

Device 1 represents the terminal itself, for which reads and writes are not monitored; thus the value of the first counter in each set will always be zero.

If more than 24 devices are used for I/O, then the 25th and higher-numbered devices "overflow" their counters into counters used for other devices. For devices 25 through 32, the second half of the second line is incremented by reads, and the first half of the third line is incremented by writes.
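Assuming the layout described above, the counter slot that a given read or write increments can be expressed as a small illustrative function (Python; rows are numbered 1 through 3, positions 1 through 16 within a row; this is a sketch, not SPIRES output):

```python
# Illustrative mapping of (device, operation) to a SHOW SINFO display slot.
def sinfo_slot(device, op):
    """Return (row, position) of the counter incremented for a read ('R')
    or write ('W') on the given device identifier (1-32)."""
    if op == 'R':
        if device <= 16:
            return (1, device)        # row 1: reads, devices 1-16
        return (2, device - 16)       # row 2: reads 17-24; reads 25-32 overflow
                                      #   into the write slots for 17-24
    if device <= 16:
        return (3, device)            # row 3: writes, devices 1-16
    if device <= 24:
        return (2, device - 8)        # row 2, second half: writes 17-24
    return (3, device - 24)           # row 3, first half: writes 25-32 overflow

assert sinfo_slot(6, 'R') == (1, 6)    # read of device 6 counts on row 1
assert sinfo_slot(20, 'W') == (2, 12)  # writes 17-24 use row 2's second half
assert sinfo_slot(28, 'R') == (2, 12)  # overflow: a read of 28 hits that same slot
assert sinfo_slot(28, 'W') == (3, 4)   # overflow: a write of 28 hits a writes 1-8 slot
```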

The display from the SHOW SINFO command is automatically presented in a SPIBILD session for which SINFO is invoked; the display is presented after each SPIBILD command that does file processing (e.g., PROCESS, BATCH, etc.).

The ORVYL command SHOW FILES ATTACHED can be helpful when you are trying to figure out which file is associated with each device identifier. It shows all the ORVYL data sets that are currently attached, each with its device identifier number.

13.4  The CLEAR SINFO (CLE SIN) Command

The CLEAR SINFO command resets all read and write counters to zero; monitoring remains enabled.

13.5  The SHOW FILE COUNTS and SET FILE COUNTS Commands

You can get much of the same information as the SHOW SINFO display, in an easier-to-read form, using the SHOW FILE COUNTS facility, which includes the SHOW FILE COUNTS, CLEAR FILE COUNTS and (in SPIBILD) SET FILE COUNTS commands. Even better, SHOW FILE COUNTS breaks down the I/O by record-types, not just by data sets.

In SPIRES, issue the SHOW FILE COUNTS command to see the number of reads and writes from and to the attached ORVYL data sets and specific record-types of the attached SPIRES files:

The options are:

The example below will demonstrate the ALL display.

You can reset the file counts by issuing the CLEAR FILE COUNTS command:

which has no options.

File Count Displays in SPIBILD

The SHOW FILE COUNTS command cannot be issued in SPIBILD; instead, you issue the SET FILE COUNTS command before a SPIBILD processing command (such as INPUT BATCH or INPUT MERGE):

The SET FILE COUNTS command has no options -- it is similar to SHOW FILE COUNTS ALL in SPIRES. The file counts display for each file being used appears as it is detached; the display for system files appears as you exit SPIBILD. Once set in SPIBILD, the file counts facility cannot be turned off except by exiting SPIBILD. [EXIT QUIET in SPIBILD will not suppress the file counts display.]

There is no CLEAR FILE COUNTS command in SPIBILD, so you cannot reset the counts except by exiting and calling SPIBILD again.

Here is an example showing the type of information shown by the SHOW FILE COUNTS and SET FILE COUNTS commands.

For each file involved, SPIRES displays the number of ORVYL blocks read and written since the primary file was attached (i.e., the file holding the subfile selected through the primary path), or since the CLEAR FILE COUNTS command was issued. Specifically, the read/write data is displayed for:

So the example above shows, for example, that SPIRES read five blocks of the GQ.JNK.ALMANAC file's deferred queue, and wrote two. Keep in mind that the numbers reflect the number of read/write operations, not the numbers of different blocks read or written. Since the counts are cumulative over the SPIRES session, the same block may be read or written multiple times, with each operation being counted.

For system files, SPIRES displays similar counts, except that they are combined into one display. That is, the display for "System files" shows counts as if they all came from a single large system file, rather than from individual files. For example, REC2 represents a record-type in the system's FILEDEF file that is read when users select a subfile, but it also represents a record-type in the system's FORMATS file where compiled formats code is stored.

Here is a list of the record-types you are likely to see in the system files section, along with the types of data being read from them:

13.6  The SHOW SUBFILE INFORMATION Command

The SHOW SUBFILE INFORMATION command provides useful information about the selected subfile. In particular, it tells you the name of the subfile, the name of the file containing the subfile, the name of the goal record-type (the RECORD-NAME element in the file definition), and the name of the format (FORMAT-ID in the format definition) of the set format (if any). People who are trying to define formats for a subfile but do not have the file definition will need this information. Though it is available through several SPIRES variables ($SELECT, $FILENAME, $GOALREC and $FORMAT), this command makes access to the information easier.

In addition, the SHOW SUBFILE INFORMATION command displays information about various paths that may have been set. [See 14.2.]

The complete syntax of the command is

Of course it should only be issued when a subfile is selected.

Below is an example of a session in which the command is issued. The information is shown in a titled display:

-? select records
-? show subfile information
               Information for Selected Subfile RECORDS

                     Goal Record Information
 Type     Subfile-Name            File-Name             Record  Format

 Primary  RECORDS                 GA.JNK.TRECORDINGS    DISC    ALBUM

RECORDS is a subfile of the file GA.JNK.TRECORDINGS. The goal record of the subfile is called DISC. The set format is ALBUM.

The Type value ("Primary") indicates that the information on that line refers to the primary subfile, i.e., the selected subfile. Additional lines may indicate subfiles and/or record-types that are linked to via subgoal processing in formats or phantom structures:

               Information for Selected Subfile PHANTOM TEST

                     Goal Record Information
 Type     Subfile-Name            File-Name             Record  Format

 Primary  PHANTOM TEST            GQ.JNK.ALMANAC        REC3
 Path  2  SELECTIONS              GQ.JNK.TRECORDINGS    XREC19  LONG
 Subgoal                          GQ.JNK.ALMANAC        ENTRY
 Subgoal                          GQ.JNK.TRECORDINGS    REC14

                        Path Information
  Path  Name              Type       Value

    2                     Subfile    SELECTIONS

The subfile PHANTOM TEST is the primary subfile; SELECTIONS is selected through path 2, an alternate subfile path, as indicated in the path information at the end of the display. Two record-types are accessed via subgoal (or perhaps phantom structures): ENTRY from the almanac file and REC14 from the TRECORDINGS file.

Here is a set of commands that might have preceded the SHOW SUBFILE INFORMATION display that appears above:

Although the SET FORMAT command sets a format that will use subgoal processing, the actual link to the subgoal does not occur till the frame that calls the subgoal is executed. Hence, if the SHOW SUBFILE INFORMATION command had been issued prior to the DISPLAY command, the TRECORDINGS -- REC14 subgoal would not have appeared.

14  Path Processing

Path processing lets you simultaneously have several subfiles selected, and/or several formats set for a subfile, allowing you to access data through each of these paths as if it were the only subfile or format in effect. Thus, you can have several different views of your SPIRES "world" at the same time. This capability may dramatically change not only the way people use SPIRES in the everyday interactive mode but also the way applications networks are constructed in SPIRES.

There are some restrictions on this feature that are discussed below; only subsets of the SPIRES commands are allowed through various types of paths.

14.1  The Primary Path

In all path processing, there is a primary path to which all other paths are connected. The primary path is established when you select a subfile. The data in the attached file is seen through the view established by the subfile and the restrictions established by the file definition. There is a "goal record-type" established which determines the set of records in the file that you can retrieve, display and update. There is also a format established here for viewing the data -- either the SPIRES default format or a customized output format.

Once the primary path is established, of course, many SPIRES commands become available for processing records. You can add, display or update records, search through indexes, sequentially scan with Global FOR, or look at informational displays concerning the data, such as SHOW INDEXES.

Previously, you were locked into this view of the data: you lost it if you wanted to see data from another subfile or through a different format. Subgoal processing in formats relieved this problem somewhat, but not without a lot of premeditation.

14.2  Establishing Alternate Paths

Once a primary path is established by issuing a SELECT command, you can open up other paths that allow you to either select other subfiles or set other formats, with neither of these actions eliminating your primary view.

A single command prefix controls most of the path processing. It has the following form:

THROUGH can be abbreviated down to 3 characters; THRU is also allowed as a variation. The term following THROUGH gives a path value that either establishes a new path or refers to a path already established. Up to 32 paths can exist simultaneously, if there are enough system resources to support them. Each path is associated with a number (1 to 32) that you may use when issuing commands through that path. You may choose to have SPIRES assign a path number by using the NEXT option. Alternatively, you may assign an unused path number (1 to 32) or specify a name, in which case SPIRES will also assign a path number. Subsequently you use the paths by including either the name or the number on the THROUGH prefix when issuing a command.
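The path-numbering rules above can be modeled as follows. This is a hypothetical sketch in Python, not SPIRES code: it assumes only what the text states (up to 32 paths, NEXT assigns the lowest unused number, a name is also bound to an assigned number), and all identifiers in it are invented.

```python
MAX_PATHS = 32

def open_path(paths, spec):
    """paths maps path number -> path name (None for unnamed paths).
    spec is "NEXT", an unused number from 1 to 32, or a new name."""
    if isinstance(spec, int):                    # caller picked the number
        if not 1 <= spec <= MAX_PATHS or spec in paths:
            raise ValueError("Invalid path: %d" % spec)
        paths[spec] = None
        return spec
    # "NEXT" or a name: SPIRES assigns the lowest unused number
    num = next(n for n in range(1, MAX_PATHS + 1) if n not in paths)
    paths[num] = None if spec.upper() == "NEXT" else spec.upper()
    return num

paths = {}
open_path(paths, "NEXT")        # assigned path 1
open_path(paths, "FORMATS")     # assigned path 2, usable by name or number
```

A path opened by name can then be referenced on the THROUGH prefix either by that name or by its assigned number, as described above.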

Each time a path is opened or referenced by a command using the THROUGH prefix, the system variable $PATHNUM is set to the number of the path referred to. This allows a protocol to easily establish and clear a path without affecting other paths that may be set. For example, a THROUGH NEXT SELECT FILEDEF command could be issued, followed later by the command "/THROUGH $PATHNUM CLEAR SELECT" to clear the path.

14.3  Alternate Subfile Paths

To establish an alternate subfile path:

where "path-name" is either a name, a number or NEXT, as discussed above.

Two alternate forms, though not "subfile" commands in the strict sense of the word, have similar effects, creating what we will call subfile paths anyway:

The ATTACH command lets you "select" record-types of files to act as if they were goal record-types of a selected subfile; issue EXPLAIN ATTACH COMMAND for more information about it.

The command SET SUBGOAL is similar to ATTACH, except that you can specify only a record-type of the same file housing the subfile selected on the primary path. You cannot do this unless you have SEE-level access to the file (as with ATTACH) or the primary subfile has given you subgoal access to the record-type you name (via the SUBGOAL statement in the file definition).

The function of these commands is to select additional subfiles when a primary subfile has already been selected. There are several reasons for doing this:

Here is an example, showing the commands of a user who wants to change a format definition while examining records using the format:

Multiple subgoals can be in effect at a single time. They may be combinations of subgoals established by the SUBGOAL statement in the SUBFILE section of the file definition, subgoals established through formats processing, and alternate subfiles selected by this command.

When this command is issued, automatic format selection occurs if there is a default custom format set for the subfile. Currently, subfiles with logging or charging can be selected through a path but only the select and detach of the file will be recorded in the log of the secondary file. Also, subfiles with SELECT-COMMANDs defined may not be selected through a path.

To eliminate the alternate subfile, issue one of the following commands:

where "path-name" indicates either the name or the number of the path established by the "THROUGH path-name SELECT" command.

Commands Available in Alternate Subfile Paths

As mentioned before, only a subset of the commands available through the primary path can be issued through an alternate subfile path. Multiple record commands and searching commands -- whether through a FIND command or Global FOR -- are not allowed. However, you can do those things through the alternate path after issuing the SET PATH command. [See 14.6.]

Currently you can use the following SPIRES commands through an alternate subfile path:

The following Global FOR commands are available through a path:

The following partial processing commands are allowed through a path:

For IF ... THEN, and subsequent THEN or ELSE commands, or any other commands that themselves include subcommands (such as RETURN in a protocol, or a DISPLAY command with an "END=command" clause in Global FOR), any THROUGH prefix is cancelled just before the subcommand is executed. That means that if you want the THROUGH prefix to be in effect for the execution of the subcommand, you must repeat it at the start of the subcommand.

For example, if you have SUBFILE1 selected as the primary subfile and SUBFILE2 is selected through path 2:

This at first glance seems preposterous, but on second glance, you'll recall that the THROUGH 2 prefix is cancelled prior to the execution of the subcommand in the THEN clause. In other words, SPIRES interprets the command like this: if the subfile selected in path 2 is SUBFILE2, then show me the subfile selected on the primary path. If you continued with this:

However, be careful about continuing with THEN or ELSE statements and paths. For example, if you now tried this, very similar to the last example:

Again, the THROUGH prefix is cancelled just before the execution of the subcommand, right after the THEN.

One last note on this subject: you cannot use the THROUGH prefix on an IF... THEN command if the THEN clause starts a block construct (e.g., with BEGINBLOCK, REPEAT UNTIL..., etc.).

14.4  Alternate Format Paths

To establish an alternate format path for the primary subfile:

where "path-name" (a name, number or NEXT) is a new name that does not correspond to an existing path.

Once you establish a path of this type, you can issue almost any commands through this path that can be issued through the primary path, since the primary goal record set is still in view through a format path. (SET ELEMENTS, DEFINE TABLE and SET INPUT FORMAT are not allowed.) Multiple record commands (like TYPE and Global FOR commands) are allowed since the only difference is that a new or different format is used.

To clear an alternate format path, issue one of these commands:

If the selected subfile of the primary path has a customized format automatically set, you might want to establish a secondary path with the standard SPIRES format. To establish a "default format path" you issue this command:

where "path-name" (a name, number or NEXT) designates a new path. Any record-access or update commands issued through this path will use the standard SPIRES format. You may not issue a SET FORMAT command through this path later.

To clear a default format path, you can use either of the commands shown above for clearing a format path:

Note then that the "THROUGH path-name CLEAR FORMAT" command can either establish or clear a path:

14.5  Alternate VGROUP Path

Sometimes you may need to eliminate interference between Static variable groups (VGROUPS). Perhaps a format is using a global vgroup, and you want to establish an alternate format that uses the same vgroup. A convenient way to do this is to first establish an alternate vgroup path:

where "path-name" is either NEXT or a new name or number. Next you may issue the command "THROUGH path-name SET FORMAT name", specifying the established path. That format can then be used to access records in the primary subfile with no interference from other vgroups.

The alternate vgroup path can be cleared by issuing the command:

14.6  Establishing a Subfile Path as the Default Subfile

With the SET PATH command, you can set one of your subfile paths as the "default" -- all subsequent commands that are not prefixed by a THROUGH prefix will be applied to the default subfile path, not the primary subfile.

The "path-name" or "path-number" is the name or number of the path you want to establish as the default path. It must be the name or number of a subfile path, not any other kind of path. DEFAULT is a "noise word" in the command in this form and has no effect -- if it helps you remember what the command does, you are welcome to use it. (It does have a disadvantage, described below.)

When you establish a secondary subfile as a temporary default this way, any commands that would normally be directed to the primary select will assume the default path instead. More importantly, searching, multiple record processing, and record updating under Global and Partial FOR processing are allowed against the new default subfile.

To return control to the primary path, issue the command:

For the purposes of this command and the $PATHCUR variable, "0" (zero) is the primary path number. Note that SET DEFAULT PATH is not allowed by itself.

SPIRES retains results and Global FOR information for the primary path while another path is used as the default.

If you switch to a different subfile path with a new SET PATH command, then any result, stack and Global FOR conditions for the previous default path are cleared. However, you can retain them by using the SET NEXT PATH command, described below.

When you have set a temporary default path, you may still use the THROUGH prefix to access records in the other established paths. Additionally, you may issue THRU commands that would create more paths, such as THROUGH NEXT SELECT FILEDEF.

SET ELEMENTS is not available in a default path.

The SET NEXT PATH and SET PREVIOUS PATH Commands

As mentioned above, you can retain results, stacks and Global FOR conditions as you move from one default path to another if you switch to the next default path with the SET NEXT PATH command:

The "path-name" or "path-number" is the name or number of the path you want to establish as the new default path. It must be the name or number of a subfile path that is not already a "next path".

When you use the SET NEXT PATH command, SPIRES stacks the information it needs to retain for the old default path onto the information it retains for the primary. If you then issue another SET NEXT PATH command for another subfile path, the stack of retained information grows bigger.

To return back to the previous default path, you can issue the SET PATH or SET DEFAULT PATH command with the previous path's name or number, or more conveniently, you can issue the SET PREVIOUS PATH command (it has no options) to go back one level in the stack.

Before issuing the SET PREVIOUS PATH command, be aware that you will lose the current default path's result, stack and Global FOR conditions -- that's because you will be "unstacking" the retained information as you go backward. In other words, SET NEXT PATH and SET PREVIOUS PATH are for use in "nesting" situations: when you SET NEXT PATH, you are nesting deeper and deeper, not truly switching from one path to another. When you back out, you lose the conditions of the path you are leaving.

SET PREVIOUS PATH will back you out to the previous path; you can back out multiple paths at once to a specific path by naming it in the SET PATH or SET DEFAULT PATH command -- the retained information for all paths in between will also be discarded. For example, if you set path 1 as the default, then set next path on path 2 and then set next path on path 3, you can return to path 1 as the default with SET PATH 1; but you will lose any results, etc., on paths 2 and 3. You can eliminate all retained information for all default paths by returning to the primary with SET PRIMARY PATH.
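The nesting behavior just described can be sketched as a stack. This is a hypothetical Python model (names invented), assuming only what the text states: SET NEXT PATH stacks the old default's retained state, while SET PREVIOUS PATH or SET PATH unstacks, discarding the state of every path backed out of.

```python
class PathNest:
    """Model of default-path nesting; each stack entry stands for one
    path's retained result, stack and Global FOR conditions."""

    def __init__(self):
        self.stack = [0]              # 0 is the primary path

    def set_next_path(self, path):
        if path in self.stack:
            raise ValueError("path is already a next path")
        self.stack.append(path)       # old default's state is retained

    def set_previous_path(self):
        if len(self.stack) > 1:
            self.stack.pop()          # current default's state is lost

    def set_path(self, path):
        # backing out several levels discards everything in between
        while len(self.stack) > 1 and self.stack[-1] != path:
            self.stack.pop()

    def set_primary_path(self):
        self.stack = [0]              # all retained path state is gone

nest = PathNest()
nest.set_next_path(1)
nest.set_next_path(2)
nest.set_next_path(3)
nest.set_path(1)                      # paths 2 and 3 are unstacked and lost
```

This mirrors the example above: after returning to path 1, the retained conditions for paths 2 and 3 are gone, and SET PRIMARY PATH would discard everything.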

The system variable $PATHCUR, which provides the number of the current path, in conjunction with the $PATHINFO and $PATHFIND functions, can be very helpful in keeping track of which path you are in when you have multiple default paths. [See 14.8.]

14.7  Clearing Paths

To clear all paths, simply issue the CLEAR SELECT command or select a new primary subfile. In the future there may be ways to keep a path open even when the primary subfile is cleared, but currently there is no way to do so.

You can clear an individual path that is set by issuing the CLEAR PATH command through that path (in other words, to clear a single path, you must make it the current path first):

This clears the path and returns you to the primary path (not the default path, if set).

You can reinitialize the state of all established paths without having to reselect the primary subfile and re-establish all the paths. This is done with the CLEAR PATH ENVIRONMENT command:

This command does the following to the primary and all other alternate subfile paths:

This command does not do any of the following:

14.8  Obtaining Information on Paths

SHOW SUBFILE INFORMATION

To see a list of all subgoals and alternate subfiles currently in effect, as well as information concerning each path that has been established, issue the SHOW SUBFILE INFORMATION command. For example:

  -> select filedef
  -> thru newpath select formats
  -> show subfile information

                 Information for Selected Subfile FILEDEF

                       Goal Record Information
   Goal-Type  Subfile-Name        File-Name            Record  Format

   Primary    FILEDEF             $FILEDEF             REC1
   Path  1    FORMATS             $FORMATS             REC1

                          Path Information
    Path  Name              Type       Value

      1   NEWPATH           Subfile    FORMATS

$PATHFIND and $PATHINFO Functions

Another way to obtain similar information about paths is to use the $PATHFIND function or the $PATHINFO function, documented in the manual "SPIRES Protocols".

To oversimplify somewhat, $PATHINFO lets you determine the name of a path or its type (e.g., Subfile or Format) when you give the path's number, whereas $PATHFIND tells you a path's number (counting from 0 for the primary path) when you give the path's name or the name of the subfile on the path. The Protocols manual contains full details on these functions, but the examples below briefly indicate their use. (The same files are selected as in the example for SHOW SUBFILE INFORMATION above.)

  -> show eval $pathinfo(1,Type)
  Subfile                           <--Path 1 is of "Type" Subfile

  -> show eval $pathfind('FORMATS',Subfile)
  1                                 <--The FORMATS Subfile is selected
                                       on path number 1

The $PATHCUR Variable

$PATHCUR is an integer variable that contains the number of the current path in use during a SPIRES session. The value will be 0 if the session is currently on the primary path.

14.9  Examples of Path Processing

Below is a sample session using some of the path processing commands discussed above:

  -? -           Select the primary subfile
  -? select runner names
  -? -           Establish a format for the primary data
  -? set format runner.data
  -? -           Establish a path for default Spires output
  -? through next clear format
  -Path established: 1
  -? for tree
  +? skip 47
  +? -           Display a record through the format
  +? display
                ANGELL FIELD ANCIENTS RUNNER RESULTS

            Ronald D. McDonald               Fac/Staff Yes
            Birth date 11/10/32              Dept  Human Biology

      Events          1975-76        1976-77        1977-78        1978-79
      ------            TIMES          TIMES          TIMES          TIMES

      440 YARDS           103             77           66.5             66
      880 YARDS          2:27           2:40           2:32
      1 MILE             5:18           5:18           5:24           5:16
      2 MILE            11:31          11:22          11:42          11:14
      3 MILE            17:45          17:56          17:21          17:10
      5 or 6 MILE       30:32          37:39          39:39          35:16
      10 MILE           62:29          63:46          64:32          61:06
      MARATHON        3:01:50           3:01        3:02:47        2:58:32
      4 X 440                             67             66
      4 X 880            2:40                          2:31
  +? -           Display the record through the default format
  +? through 1 display *
   NAME = MCDONALD, RONALD D.         ;
   PTR1 = "McDonald, Ronald D.         00101";
   PTR3 = "McDonald, Ronald D.         00102";
   PTR5 = "McDonald, Ronald D.         00103";
   PTR6 = "McDonald, Ronald D.         00104";
   PTR7 = "McDonald, Ronald D.         00105";
  +? -           Establish an alternate subfile path
  +? through formats select formats
  -Path established: 2 FORMATS
  +? -           Set format for the alternate path
  +? through formats set format charnames
  +? -           Display a record through the alternate subfile
  +? through formats display gg.wck.runner.data
   ****** - GG.WCK.RUNNER.DATA
     GG.WCK.RUNNERS/REC4/RUNNER.DATA
  +? -           Establish an alternate vgroups path
  +? through next clear vgroups
  -Path established: 3
  +? -           Establish alternate format for the primary subfile
  +? through signup set format **signup
  -Path established: 4 SIGNUP
  +? -           Allocate a vgroup through this path
  +? thru 3 allocate orv.gg.spi.standard
  +? -           Show the allocated vgroups
  +? show allocated
  Static Group : GG.WCK.RUNVARS

  Static Group : GG.SPI.STANDARD

  Static Group : GG.WCK.LOCAL
  +? thru 3 show allocated
  Static Group : GG.SPI.STANDARD
  +? -           Show the current path and subgoal data
  +? show subfile information
                 Information for Selected Subfile RUNNER NAMES

                       Goal Record Information
    GOAL-TYPE  SUBFILE-NAME      FILE-NAME           RECORD   FORMAT

    Primary    RUNNER NAMES      GG.WCK.RUNNERS      REC4     RUNNER.DATA
    Path  2    FORMATS           GG.SPI.FORMATS      REC1     CHARNAMES
    Subgoal                      GG.WCK.RUNNERS      REC1
    Subgoal                      GG.WCK.RUNNERS      REC3
    Subgoal                      GG.WCK.RUNNERS      REC5
    Subgoal                      GG.WCK.RUNNERS      REC6

                          Path Information
    PATH  NAME              TYPE       VALUE

      1                     Format     Standard Format
      2   FORMATS           Subfile    FORMATS
      3                     Vgroup
      4   SIGNUP            Format     **ORV.GG.WCK.SIGNUP
  +? -
  +? - Now look at some error diagnostics
  +? -
  +? -           Attempt to establish a path that exists
  +? thru 1 select tag1
  -Invalid path: 1
  +? -           Select a subfile that is already selected
  +? thru next select formats
  -Subfile 'FORMATS' is already selected
  +? -           Try an invalid command through a subfile path
  +? thru 2 type
  -Command not allowed through path: 2 FORMATS
  +? -           Use a path that does not exist
  +? thru jack display
  -Invalid path: JACK
  +? -
  +? -           Now clear some paths
  +? -
  +? thru 1 clear path
  -Path cleared: 1
  +? thru signup clear format
  -Path cleared: 4 SIGNUP
  +? thru formats clear select
  -Path cleared: 2 FORMATS
  +? -
  +? -           Look at the data now
  +? show subfile information
                 Information for Selected Subfile RUNNER NAMES

                       Goal Record Information
    GOAL-TYPE  SUBFILE-NAME      FILE-NAME           RECORD   FORMAT

    Primary    RUNNER NAMES      GG.WCK.RUNNERS      REC4     RUNNER.DATA
    Subgoal                      GG.WCK.RUNNERS      REC1
    Subgoal                      GG.WCK.RUNNERS      REC3
    Subgoal                      GG.WCK.RUNNERS      REC5
    Subgoal                      GG.WCK.RUNNERS      REC6

                          Path Information
    PATH  NAME              TYPE       VALUE

      3                     Vgroup
  +? -           Clear out everything
  +? clear select
  -?

14.10  Simultaneous Transfers and References in Paths

It is possible for several different records to be transferred or referenced at once through different paths. The system variable $TRANSFER, which contains the internal form of the key value of the currently transferred or referenced record, has a different value on each path.

For example:

A maximum of 8 records may be juggled simultaneously that way, although the limit will be one lower when either of the following commands is in effect (two lower if they both are):

14.11  The CLEAR SUBGOALS (CLR SUBG) Command

Subgoal processing is a technique that lets you verify or retrieve data in other record-types while you have a given subfile selected. Records in either another subfile or another record-type of the same file as the selected subfile are accessible through subgoal processing.

Subgoal processing can be done in several ways:

When you use a subgoal, information about that subgoal path to the other record-type remains in internal memory until you CLEAR SELECT or select another subfile. In most situations, that is very useful -- if more subgoal processing is done, the path doesn't need to be re-established, which would involve the same overhead again.

In some situations, though, so many subgoals have been used that they help create a memory-management problem -- the upper limit is 40 subfile subgoals at a time. This problem is perhaps more likely to arise in complex applications where several subfiles are selected through path processing. [See 14.] To help control the problem, the CLEAR SUBGOALS command may be issued to eliminate subgoals established from the selected subfile. It may be preceded by the "THROUGH path" option to clear subgoals established through a given path.

You should not use the command to arbitrarily clear subgoals; use it in cases where you establish a subfile subgoal for one-time-only data retrieval and the continuing presence of the unneeded subgoal connection is causing memory problems. Definitely avoid using it if the subgoal path would be continually established and cleared.

If a subgoal is established through a given path (say, path 1) and then another path also uses it, it will still be tied to path 1. That means that the THROUGH 1 CLEAR SUBGOAL command would clear that subgoal; subsequent use of it through the other path would re-establish it, tying it to that path.

Non-subfile subgoals (that is, subgoals to other record-types of the file rather than to other subfiles) are eliminated by the CLEAR SUBGOALS command only if the command is issued for the path that first selected a subfile of that file.
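The tie between a subgoal and the path that first established it can be sketched like this. This is a hypothetical Python model, not SPIRES internals; the 40-subgoal limit and the re-establishment behavior come from the text, and all names are invented.

```python
MAX_SUBGOALS = 40

subgoals = {}    # (file-name, record-type) -> establishing path number

def use_subgoal(fname, rectype, path):
    """Use a subgoal through the given path, establishing it if needed."""
    key = (fname, rectype)
    if key not in subgoals:
        if len(subgoals) >= MAX_SUBGOALS:
            raise RuntimeError("too many subgoals; CLEAR SUBGOALS needed")
        subgoals[key] = path           # tied to the establishing path
    return subgoals[key]

def clear_subgoals(path):
    """Model of THROUGH path CLEAR SUBGOALS."""
    for key in [k for k, p in subgoals.items() if p == path]:
        del subgoals[key]

use_subgoal("GG.WCK.RUNNERS", "REC1", 1)   # established through path 1
use_subgoal("GG.WCK.RUNNERS", "REC1", 2)   # reused, but still tied to path 1
clear_subgoals(1)                          # THROUGH 1 CLEAR SUBGOALS removes it
use_subgoal("GG.WCK.RUNNERS", "REC1", 2)   # re-established, now tied to path 2
```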

15  Maintenance and Debugging Commands

These commands are generally restricted to those with MASTER privileges.

15.1  DUMP BLOCK Command

The DUMP BLOCK command dumps a hexadecimal and character view of a particular block within a specified dataset of the currently attached file into the active file.

Syntax

<number> is the block number to be dumped. It may be specified in decimal, or in hexadecimal with a leading 0. Thus blocks 15 and 0F are the same block.

<RECn> is a particular physical RECn dataset, from REC1 through RECF.

The CLEar (or CLR) option tells SPIRES to clear the active file before doing the dump. The CONtinue option tells SPIRES to append the dump information to the current active file.

The dumped information looks something like this:

   d10db0  0000    65030493  00020002  00000000  00000000   |...l............|
   d10dc0  0010    00270103  0493e7f0  f1402006  0503022e   |.....lX01 ......|
   d10dd0  0020    7980ffff  ffff0000  00000320  0000c7c7   |..............GG|
   d10de0  0030    4be2d7c9  00000001  c00001c0  0001d900   |.SPI..........R.|
   d10df0  0040    01400001  01002e00  01002a00  01010001   |. ..............|
   d10e00  0050    800022d4  818995a3  85958195  83854081   |...Maintenance a|
   d10e10  0060    958440c4  8582a487  87899587  40c39694   |nd Debugging Com|
   d10e20  0070    94819584  a2000420  06050300  04200605   |mands...........|
   d10e30  0080    03006900  01006520  06050300  01900005   |................|
   d10e40  0090    00010001  01005500  01005100  0120004c   |...............<|
   d10e50  00a0    00010048  e38885a2  85408396  94948195   |....These comman|
   d10e60  00b0    84a24081  99854087  85958599  819393a8   |ds are generally|
   d10e70  00c0    409985a2  a3998983  a3858440  a39640a3   | restricted to t|
   d10e80  00d0    8896a285  40a689a3  8840d4c1  e2e3c5d9   |hose with MASTER|
   d10e90  00e0    40979989  a5899385  8785a24b  00000000   | privileges.....|
   d10ea0  00f0 to 07ec      00000000                       |................|
   d115a0  07f0    00000000  a70800ec  90dc0010  65030493   |....x..........l|

The first column contains the physical memory address where the block was placed when it was dumped; this is usually of no value to you. The second column contains the relative location within the block of the first byte of information on each line. The 3rd, 4th, 5th, and 6th columns are the hexadecimal representations of four consecutive bytes of data within the block. Sometimes a range of relative locations is shown with a single 4-byte value, which means that value is duplicated through the entire range. The last column, with surrounding vertical bars, is the character representation of the data within the block for this line of the dump. Unprintable characters are output as dots (periods).
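The line format just described can be sketched in Python. This is not the actual DUMP BLOCK code; it produces the relative offset, the four 4-byte hex groups, and the character column with unprintable characters shown as dots. The character column in the sample decodes as EBCDIC, so code page 037 is assumed here.

```python
def dump_line(offset, data16):
    """Format one 16-byte slice of a block as a dump line."""
    groups = "  ".join(data16[i:i + 4].hex() for i in range(0, 16, 4))
    chars = data16.decode("cp037")            # EBCDIC assumed
    chars = "".join(c if c.isascii() and c.isprintable() else "."
                    for c in chars)
    return "%04x    %s   |%s|" % (offset, groups, chars)

# Bytes taken from relative location 0050 of the sample dump above
raw = bytes.fromhex("800022d4818995a38595819583854081")
print(dump_line(0x0050, raw))
# 0050    800022d4  818995a3  85958195  83854081   |...Maintenance a|
```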

15.2  FIX BLOCK Command

The FIX BLOCK command allows you to view, edit, and rewrite a particular block within a specified dataset of the currently attached file.

Syntax

<number> is the block number to be fixed. It may be specified in decimal, or in hexadecimal with a leading 0. Thus blocks 15 and 0F are the same block.

<RECn> is a particular physical RECn dataset, from REC1 through RECF.

Only those with MASTER mode capability can successfully issue this command. You must have the database attached (or selected). Once you issue the FIX BLOCK command, the requested block is located and brought into memory. If it doesn't exist (S96 error), an empty block will be constructed for you.

The FIX BLOCK command then does a Pause operation, placing you within the Emulator, where you can use debugging commands to view (SC) or patch (PC) the record located by @10 (register 10). See the debugging information for details on this aspect of the process. [See 15.3.]

When you are done viewing or editing the block, issue the GO command. This returns control back to SPIRES, and you'll be prompted to WRITE the block. If you respond NO, the block is not altered. If you respond YES or OK, the block is written if you have write permissions to this dataset of the attached file.

15.3  DEBUGGING COMMANDS, EMULATOR

The Emulators come equipped with debugging aids. These are primarily available to the system coordinator (GG.SPI). However, the Emulators can be called with a -d option, which signals the desire to debug.

If you have compiled a debugging version of the Emulator (with gcc's -g option), then you can run the Emulator with "gdb", something like this (assumes "emg" Emulator):

% gdb emg

There is a globally addressable routine called "dbugtrap" in debugging Emulators that is called by a DBUG command. Once in gdb, you can set a breakpoint as follows:

> break emsvc.c:dbugtrap

With that breakpoint in place, you can then run the Emulator program:

> run [dash-parms] [target]

Both dash-parms and target are optional. The target is the program you intend to emulate, such as SPIRESH, PL360, ANALYZER, etc. Of course, you must run an Emulator that is compatible with the target. You can give the target in lower case, mixed case, or UPPER case. If you don't give a target, you'll be prompted for it upon entry. A very common dash-parm is -! to cause an immediate TRAP in the target.

Once the target program is running, an attention-interrupt may cause gdb to take control. You can examine data with gdb commands, a list of which can be found using the "help" command. You can also examine the Emulator's 16 pseudo-registers, R0 thru R15, the program's starting point, or the current execution point, like this:

Note that you can cause Emulator debugging to be activated by doing the following command:

Setting SEBflg turns on trapping for branches, which allows you to continue execution and trap on every branch. This can come in handy when the target program runs away: if gdb responds to the attention-interrupt, you can set SEBflg=1 and continue in order to trap the runaway loop. Once you get an SEB trap, you can use the Emulator's debugging commands. Of course, if the loop is in the C code (the Emulator itself), then you need to use the gdb debugging commands, but having SEBflg=1 may still be handy.

1) Dash-parms

The dash-parms are what the Emulator reads when it is entered. There are several, including these:

2) %%-commands

The following %%-commands can be issued from any command prompt, from inside protocols, or from the $ISSUECMD function (anywhere).

3) DEBUG COMMANDS

Most debugging commands can be issued wherever %%-commands are allowed, from a sprung TRAP, or from a system Pause. Most are available directly in SPIRES, such as SA.

DBUG calls the "dbugtrap" routine. If you have a breakpoint there, you get a "gdb" trap, which allows you to set other breakpoints. This is very handy if you want to debug an emulated instruction in "em.c", or something in "emsvc.c", which you know is executed many times before you get the conditions set up that fail.

SEB -- Set Event Branch. This causes emulated code to trap each successful branch instruction. The trap is sprung AFTER the jump but BEFORE the execution of the instruction at the jump location. Therefore, the "X" command does not work because this is equivalent to a PostTrap caused by X. You must use some form of GO to continue.

CEV -- Clear Event Branch. Turns off SEB.

SF -- Show all four Floating-point registers.

SP -- Show the current Program Address (PSW -- Program Status Word).

STS -- Show all currently set traps. (See ST below)

CTS -- Clear all currently set traps. Does NOT affect SEB.

SA -- Show Address. This command takes an expression that can be any combination of the following items, combined with the operators described below.

 1.  *   (the current Program Address)
 2.  Map_Symbol  (from PROGRAM.MAP file, such as: SPIRESH.MAP)
 3.  <decimal>
 4.  0<hexvalue>
 5.  $<hexvalue>
 6.  #<hexvalue>
 7.  @<register>

In all cases, additional values of types 3 and beyond may be added, subtracted, multiplied or divided to adjust the initial address. The symbols +-*/ indicate the operation. Evaluation is strictly left to right. <hexvalue> is a hex number. $0 is the first instruction of the loaded program. #0 is the first location in user-workspace. <register> must be a number from 0 thru 15, and @<register> indicates the address is found in the lower 24-bits of the specified <register>. Thus, @13 indicates register 13 contains the address. Map_Symbol and * may only start an expression.

Examples:

    sa 123
    sa 0FACE
    sa SEMANT+012-$0
    sa @3-@5
    sa 0-@5
    sa *+20

SC -- Show Core. Similar to SA, but the result must be an address that is in-bounds of either the program or workspace.

There are three general forms of SC:

    SC address
    SC address,length
    SC address_one.address_two

The first form is just a special case of the second, with 256 as the default length. Length may be specified as any expression (as in SA). The last form gives two addresses and signifies showing core from the first address to the second. In all forms, the ending address must be in bounds.

If a resultant address expression is aligned on a 4-byte boundary, you may surround that address expression with an indirection operator which is specified as follows: >(addrexp) . This can be used by SA as well. The indirection operator may be nested.

Examples:

    sc >(@12+070),8  [Show 8 bytes using the address contained in @12+070]
    sc SEMANT        [Show 256 bytes of the program from SEMANT]
    sc @8.>(@8)      [The first word of @8 contains the end address]
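To illustrate the indirection operator, here is a Python sketch against a toy big-endian memory image (not Emulator code; the memory contents are invented):

```python
# Illustrative model of the >(addrexp) indirection operator.
import struct

def indirect(memory, addr):
    """Follow one level of indirection: read the 4-byte word at addr."""
    if addr % 4 != 0:                    # the text requires 4-byte alignment
        raise ValueError("indirection requires 4-byte alignment")
    return struct.unpack_from('>I', memory, addr)[0]

# Toy memory: the word at 0x10 holds 0x20; the word at 0x20 holds 0x30.
mem = bytearray(64)
struct.pack_into('>I', mem, 0x10, 0x20)
struct.pack_into('>I', mem, 0x20, 0x30)

assert indirect(mem, 0x10) == 0x20                  # >(010)
assert indirect(mem, indirect(mem, 0x10)) == 0x30   # nested: >(>(010))
```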

SG -- Show General Registers. There are three forms:

    SG
    SG n
    SG n,m

The first form shows all 16 General Purpose Registers. The second shows the specific register (n is 0 thru 15). The third form shows the register range, inclusive. Both n and m must be integers from 0 thru 15. If n is bigger than m, then registers are displayed from n thru 15, and then from 0 thru m. So, for example:

   SG 14,1    shows registers 14,15,0,1
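The wraparound behavior can be sketched as follows (illustrative Python, not Emulator code):

```python
# Illustrative model of the register range shown by "SG n,m",
# including the wraparound case where n is bigger than m.
def sg_range(n, m):
    if n <= m:
        return list(range(n, m + 1))
    return list(range(n, 16)) + list(range(0, m + 1))

assert sg_range(14, 1) == [14, 15, 0, 1]   # as in the example above
assert sg_range(3, 5) == [3, 4, 5]
```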

ST -- Set Trap. You specify an address, usually within the program. Each trap set is assigned a unique number. STS shows all traps.

  ST SEMANT
  ST VALTOBCD+4

CT -- Clear Trap. You specify the trap number.

4) TRAP STOP (OR PAUSE) ONLY COMMANDS

At a TRAP STOP or system Pause, the prompt is: >>

PG -- Patch General Register. The form is:

   PG n expression

where n is the General Register to patch (0 thru 15), and the expression is anything that is valid for the SA command.

PC -- Patch Core. This allows you to alter memory contents.

   PC address value_expression

The address must be in bounds of the program or user-workspace. The value_expression is either an 'apostrophe string' or a "quote string". Embedded delimiters are permitted. The value_expression may also be a hexadecimal value. A leading zero (0) is NOT needed unless you actually want to start with 0.

Examples:

  PC SEMANT-4 'Don't try'
  PC @12 47F0

The old value is displayed in a manner similar to SC, and then the new value is displayed.

PP -- Patch Program Address. This allows you to alter the PSW.

  PP address

The address must be in bounds of the program. The address is any valid expression resulting in an address, similar to the SA expressions. You must be at a session break AFTER the instruction which caused the break, such as at an SEB break, or after doing the "X" command (see below) to get past the current trap. Successful PP commands show you the new Program Address, just like SP does. Of course, you would then use a form of GO to continue execution.

Examples:

   PP @14
   PP SEMANT

GO -- Continue execution from a trap. There are several forms:

   G     same as GO, simply continues until next trap.
   GC    GO continuous.  Subsequent traps are reported,
         but execution continues without stopping.  You
         stop only at command prompts, where you can
         then issue CTS or CEV.
   Gn    GO for n (decimal) traps, then stop on n+1.
         SEB traps are included in the counting. G0 = GO
   GT address - GO, but set a trap at the given address.
         This is a temporary trap, cleared when sprung.
         Typically this is done to trap a little ahead.
         GT *+6

X -- Execute the instruction at the current trap, and stop. This is similar to GT, but you don't have to compute the next instruction address. If you GO after X, you continue with the next instruction. Normally, GO executes the trap instruction and continues with the next instruction. The advantage of X is that it lets you execute the trap instruction so you can examine the before and after effects of this single instruction. You can't X again to single-step instructions, but you can "GT *", then "X", etc. You can't use X with SEB traps.

QUIT -- From any trap, this exits from the Emulator with rc=3.

15.4  DUMP RECORD Command

The DUMP RECORD command dumps a hexadecimal and character view of a particular record within the currently attached subfile into the active file.

Syntax

IN ACTIVE [CLEAR|CONTINUE] -- The output from the DUMP RECORD command always goes to your active file. If your active file is empty, you do not need the IN ACTIVE prefix. But if your active file isn't empty, use the IN ACTIVE prefix with the CLEAR option (to avoid the prompt "OK to clear?") or with the CONTINUE option (to append the output to your active file's current contents).

<key> is the external form of the key of the record to be dumped. If the key contains quotes or semi-colons, or leading or trailing blanks, you must surround the key with quotes and double any interior quotes.

The dumped information looks something like this:

   d10db1  0000    0001c000  01d90001  40000101  002e0001   |.....R.. .......|
   d10dc1  0010    002a0001  01000180  0022d481  8995a385   |..........Mainte|
   d10dd1  0020    95819583  85408195  8440c485  82a48787   |nance and Debugg|
   d10de1  0030    89958740  c3969494  819584a2  00042006   |ing Commands....|
   d10df1  0040    05030004  20060503  00690001  00652006   |................|
   d10e01  0050    05030001  90000500  01000101  00550001   |................|
   d10e11  0060    00510001  20004c00  010048e3  8885a285   |......<....These|
   d10e21  0070    40839694  94819584  a2408199  85408785   | commands are ge|
   d10e31  0080    95859981  9393a840  9985a2a3  998983a3   |nerally restrict|
   d10e41  0090    858440a3  9640a388  96a28540  a689a388   |ed to those with|
   d10e51  00a0    40d4c1e2  e3c5d940  979989a5  89938587   | MASTER privileg|
   d10e61  00b0    85a24b                                   |es..............|

The first column contains a range of physical memory addresses where the record was placed when it was dumped. This is usually of no value to you. The second column contains the relative location within the record of the first byte of information occurring on each line. The 3rd, 4th, 5th, and 6th columns are the hexadecimal representations of four consecutive bytes of data within the record. Sometimes a range of relative locations is shown with a single 4-byte value, which means this value is duplicated through the entire range. The last column, with surrounding vertical bars, is the character representation of the data within the record for this line of the dumped information. Unprintable characters are output as dots (periods).
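For illustration, here is a Python sketch that produces lines in roughly this layout: relative offset, four 4-byte hex groups, and the character column. This is not the actual DUMP RECORD implementation; it assumes the record data is EBCDIC decoded with the cp037 codec, and it shows only the relative-offset column, not the physical-address column.

```python
# Illustrative sketch of the dump layout; assumes EBCDIC (cp037) data.
def dump(data):
    lines = []
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        groups = [chunk[i:i + 4].hex() for i in range(0, len(chunk), 4)]
        text = chunk.decode('cp037')
        # unprintable characters are output as dots (periods)
        text = ''.join(c if c.isprintable() else '.' for c in text)
        lines.append('%04x    %-38s  |%s|' % (off, '  '.join(groups), text))
    return lines
```

Running it over the EBCDIC bytes for "Maintenance and " yields a line whose hex groups begin d4818995 and whose character column reads |Maintenance and |, matching the sample dump above.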

15.5  Object Deck Maintenance

==> Object Deck Maintenance

These commands can be used to create object decks that can be loaded with a SPIRES system. The proper subfile must be selected before the command can be issued. For example, one must SELECT FORCHAR prior to a DUMP FORMAT command.

Command Syntax:

where <object> is the name of a SPIRES object deck data set, <ep> specifies the entry-point for the created object deck, <type> specifies the type of dump from the following:

<item> specifies the key of the record to dump, and may begin with $ to specify "System account".

CHAR, MSTR and GOAL require "attach <rec> of <file>" or "select <subfile>" before they are used. The others are best done when the appropriate subfile is selected, such as FORCHAR with FORMAT.

Note: DUMP SYSProc should only be done after $FILEDEF has been processed, because updates in the DEFQ are not picked up by the DUMP SYSProc command. This usually means waiting one day.

The source for TGRP.OBJ comes from $TRANSACTION.GROUP in RECDEF.

You would then move these objects into the ~/USPIRES/spisrc/obj directory. Objects are created in the SPIRES files directory when SPIRES itself creates them.

These commands are meant for systems maintenance. They are NOT general user commands, only MASTER mode commands.

Special note: DUMP ACTIONS (CLEAR|CONTINUE|KEEP) generates a list in your Active File of the syntax of all SPIRES actions.

==> Message Maintenance

There is a MSGDATA.TXT file that is $COPY'd by MSG.TXT, and this file must be rebuilt whenever SYSTEM MESSAGES are changed. MSG.TXT is then recompiled to create the MSG.OBJ and MSGO.OBJ that are included in SPIRES/SPIBILD links.

The easiest way to create MSGDATA.TXT is with the following command:

This command is a Macro pair (GEN -> .GEN.MESSAGES) that creates two files: MSGDATA.TXT and MSGDOC.TXT, of which only MSGDATA.TXT is $COPY'd. The other is just a simple text file that associates message numbers with their message.

You would then move these text files into the ~/USPIRES/spisrc/pl360 directory. Then, in SPIRES, "..compall pl360.msg" to create the Object decks, which then must be moved into the ~/USPIRES/spisrc/obj directory.

16  Setting Locks in SPIRES

The ORVYL "lock" facility provides you with a way to guarantee that a given process is only being performed by one person. Commonly it is used to insure that only a single user is updating a specific record at one time. Both private locks and shared locks are available, and up to eight locks total may be set at one time.

(Another kind of lock, called an "attach lock", keeps the ORVYL data sets of a file attached even after the user has attached a different file. The next section [See 16.1.] discusses attach locks.)

To set a lock, issue the SET PRIVATE LOCK or SET SHARED LOCK command:

where "value" is an alphanumeric string from one to 40 characters long. It may not begin with a numeral, though numerals may be included within the value. The commands may be abbreviated to SET PLOCK or SET SLOCK.

A lock is cleared by exiting SPIRES or issuing the CLEAR LOCK command:

Setting a private lock means that no other user may subsequently set the same lock until you have cleared it. For example, if you have issued the command SET PRIVATE LOCK LOMOND, then if another user tries to set the same lock:

Setting a shared lock means that another user may set the same shared lock but may not set it as a private one. In other words, if you issue the command SET SHARED LOCK NESS, then another user can SET SHARED LOCK NESS but not SET PRIVATE LOCK NESS:

The error message "Already locked or cleared" is also displayed if you try to clear a lock that has not been set or one that has not been set by you:

A protocol designed to transfer and update a record might set a lock on the record key to insure that no other user could employ the same protocol at the same time to update the same record. The other user would find that the record key was locked; the protocol would probably test the success of the SET PRIVATE LOCK command using $YES or $NO:

Another method of setting locks uses a different lock mechanism though it is used similarly to private locks:

Using this method you can only lock one value at a time; no one else may set the same lock. Note that no value is given in the CLEAR LOCK command when this method is used. Generally, because of its versatility, the first method is the preferred method of locking values. However, the second method does allow you to use some special characters, which may not be used (with the exception of a period) with the first method.

The case (upper, lower or mixed) of the locked value is not significant for either method of locking. The commands SET PRIVATE LOCK ABC and SET PRIVATE LOCK abc are equivalent.
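The private/shared locking semantics described above can be modeled with a small sketch (illustrative Python, not SPIRES or ORVYL internals; the exact same-user and error-reporting behavior is an assumption):

```python
# Illustrative model of private/shared lock semantics.
class Locks:
    def __init__(self):
        self.table = {}                   # value -> (mode, set of holders)

    def set(self, user, value, mode):     # mode: 'private' or 'shared'
        key = value.lower()               # lock values are case-insensitive
        held = self.table.get(key)
        if held is None:
            self.table[key] = (mode, {user})
            return True
        held_mode, holders = held
        if mode == 'shared' and held_mode == 'shared':
            holders.add(user)             # another user may share the lock
            return True
        return False                      # "Already locked or cleared"

    def clear(self, user, value):
        key = value.lower()
        held = self.table.get(key)
        if held is None or user not in held[1]:
            return False                  # not set, or not set by you
        held[1].discard(user)
        if not held[1]:
            del self.table[key]
        return True
```

In this model, SET PRIVATE LOCK LOMOND by one user blocks the same lock for everyone else, while SET SHARED LOCK NESS can be repeated by other users but blocks a private lock on NESS.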

16.1  Setting Attach Locks in SPIRES

The SET ATTACH LOCK command, used primarily internally in SPIRES and SPIRES programs like Prism and Folio, can be used to keep some data sets of a file attached even when the file itself is basically detached, e.g., via a CLEAR SELECT command. This is useful for files that would be attached and detached regularly during the course of a session. SPIRES uses its own version of the command to keep the $FORMATS and $FILEDEF files attached to each user's session.

SET ATTACH LOCK comes in two forms:

The effect of attach locking is that the MSTR, DEFQ, CKPT, RES and REC1 data sets are not detached when the file itself is detached. Therefore, on the next SELECT of that file, SPIRES understands that the attaches are locked and does not have to incur the overhead of re-attaching them.

The maximum number of files that can be locked in this manner is four (not counting the two files $FILEDEF and $FORMATS that SPIRES always keeps attached).

Since the command is meant for internal use, please consult your SPIRES consultant before using it in any applications of your own.

17  Subfile Tables

The SPIRES capability called "subfile tables" provides a way to define relational tables based on any SPIRES subfile you can select. The purpose of SPIRES tables is to eliminate the incompatibilities between SPIRES hierarchical subfiles and RDBMS tables. Tables may exist only virtually, for the duration of a session, or they may be generated and saved as a separate and permanent data set, accessible in SPIRES through the ATTACH SET command.

Their chief advantage is that they are compiled when they are created; using them is more efficient than using other ways of generating RDBMS tables, such as the current DEFINE TABLE implementation. Also, SPIRES subfile tables, starting from the output form of the SPIRES source data, present the data using SQL data types, such as "date", "integer", etc. Not only does that make the data output from SPIRES compatible with table data from the target or other tables, it also means that when SPIRES is able to interpret SQL search commands (which it cannot yet do), SPIRES will properly handle the data-typing of the search value for comparison with data in the subfile table.

As suggested above, subfile tables are only partially implemented; this documentation is somewhat fragmentary as well. At some point in the future, using tables, you may be able to issue SQL-like queries against a SPIRES subfile using a subfile table as an intermediary. Additionally, the DEFINE TABLE command will likely incorporate this feature, improving its efficiency as well. Your suggestions and requests for future development are welcome.

You create a SPIRES table with the DECLARE TABLE command, which, like most other DECLARE commands, usually appears within a protocol, preceding a set of descriptive declaration statements. [See 17.1.] Other commands you may use to work with tables include DEFINE SET, DEFINE DISPLAY SET and SET TABLE, using the THROUGH PATH prefix. [See 17.2.]

17.1  Establishing a Subfile Table: The DECLARE TABLE Command

The DECLARE TABLE command defines a table that follows the rules and constructs of SQL, using elements from the selected SPIRES subfile. It either names a table definition stored elsewhere, which is then loaded into SPIRES memory for your use, or it announces that the table definition statements follow (in the standard SPIRES format, "element = value;"), ending with an ENDDECLARE command. In the latter situation, the DECLARE TABLE command can be issued only from within a SPIRES protocol.

This section describes the DECLARE TABLE command, including the parts of the table definition; the next section describes how to use a table once it is defined and declared.

The syntax of the command is:

The "tablename" is a name (1-16 characters, no blanks) you will use in a subsequent command to identify the table you want to use. You may declare multiple tables, each with its own DECLARE TABLE command.

You use the WITH DECLARE prefix when the table definition has been previously created and stored in the public TABLES subfile or in another subfile of tables you have access to. This subfile should be compiled using the DEFINED-BY = $TABLE record definition. The "record-name" is the ID of the table definition, the key value of a record in a previously declared table subfile. [See 6.2.] An example below demonstrates this usage.

The Table Definition

In the simplest case, the DECLARE TABLE command is followed by the table definition, executed from within a protocol. That part of the protocol has these commands and statements:

The table definition consists of column definitions, which describe how elements in the selected SPIRES subfile (the source elements) should be mapped into columns of the table.

Each new column definition begins with a COLNAME statement (the key of the COLUMNS structure). For each column you want, you enter a column name, and you usually name the source element for the column (in the SOURCE.ELEM statement). The statements in the COLUMNS structure are each described below.

To create a column's value, SPIRES generates the output form of the source element. That value is then processed by system-defined processing rules for the specified COLTYPE (see below), creating the proper form of the column value. Because the COLTYPE processing rules are fixed and are based on SQL data types, a column of type DATE in one table will have exactly the same format as one in a different table, insuring compatibility with SQL data base programs.

The DECLARE.SUBFILE statement appears outside of the column definitions, i.e., not within the COLUMNS structure. It is used if any of the elements are defined in the DATA MOVE DECLARES subfile or any other "element definition" subfile, that is, a subfile defined with the $ELEMENT record definition. Such elements would be referenced in the column definition with the DECLARE.KEY statement, described below. The DECLARE.SUBFILE statement thus names the subfile in which the elements named in DECLARE.KEY statements are defined.

The SOURCE.STRUCTURE statement also appears outside the column definitions. This statement may be helpful or even necessary as an aid to enable SPIRES to properly access a desired source element (by eliminating duplicate element names). Also, if source elements are referenced outside the source structure, this statement will ensure that rows will be generated only for occurrences of the source structure.

Other statements also appear outside the column definitions. These statements have meaning to various SPIRES processes which have been built to deal with the generation and movement of data (e.g. PERFORM TABLE CREATE and PERFORM TABLE MOVE). These statements are:

The entire table definition ends with an ENDDECLARE command.

Statements (Elements) of a Column Definition

Except for the required COLNAME statement, all statements in a column definition are optional. A column definition with only a COLNAME and a SOURCE.ELEM statement, with perhaps a COLTYPE statement, is common. Most of the other statements are for special situations.

After COLNAME, the first group of statements is concerned with the source of the data, which starts here in its external form. The second group describes how that output data from the source is manipulated into an appropriate SQL data type or how SPIRES in the future will interpret SQL search commands against the data.

COLNAME = column-name;

This required statement gives the name of the column as you want to refer to it using SQL. It should follow SQL column name rules, e.g., 1-16 characters long; SPIRES does not verify that the name is a valid SQL column name, however.

SOURCE.ELEM = elem-name;

This is the name of the element in the selected subfile that will be the source for the column data. Its name may be preceded by structure names in the "structure1@structure2...@elem-name" format to identify the specific element if the element name is not unique in the subfile.

If the DECLARE.KEY statement is used (see below), then the element named in the SOURCE.ELEM statement is used as a "redefined" element, with its internal values being used as the internal values for the declared element. (Its use is equivalent to the "FOR element" in a DECLARE ELEMENT command.) Either the element definition may have a REDEFINES statement in it, or the SOURCE.ELEM statement may appear here; but you may not have both, and neither may appear with SOURCE.IN (see below).

DECLARE.KEY = elem-name;

This is the name of an element defined in an "element definition" subfile (named in the DECLARE.SUBFILE statement; see above). It is the key of a goal record in that subfile; the goal record is an element definition similar to a dynamic element definition. SPIRES will use that element definition to create external values, which will then be used as input values to create column values.

The DECLARE ELEMENT command (to which this is related) has the mutually exclusive options of "FOR element" and "IN structure". Similarly, you may use SOURCE.ELEM if you need the "FOR element" option with your declared element here (see above); or you may use the SOURCE.IN statement for the "IN structure" option (see below).

SOURCE.IN = structure-name;

Used only in conjunction with the DECLARE.KEY statement (above), the SOURCE.IN statement names a structure in the selected subfile; for each occurrence of the structure, an occurrence of the DECLARE.KEY element will be generated. SOURCE.IN and SOURCE.ELEM may not both be specified; similarly, SOURCE.IN is incompatible with a REDEFINES statement in the declared element definition.

SOURCE.OCC = n;

This statement may be coded to indicate that the column value is to be derived from a particular source element occurrence. This value generally would be represented by an integer that designates which source element occurrence is to be used in generating a particular column. For example, if the source element ADDRESS is a multiply occurring element and the first occurrence holds the Street address and the second holds the City portion, then you can use SOURCE.OCC to direct the two portions to separate columns. For column STREET you would code "SOURCE.OCC = 1;" and for column CITY you would code "SOURCE.OCC = 2;".

The statement "SOURCE.OCC = N;" provides a variation of this statement and may be used to deal with a specific situation that can arise when generating a table from a SPIRES record. The literal "N" tells SPIRES to match the occurrence number to the row number: for the nth row generated (driven by a second multiply occurring element), the nth occurrence of this source element supplies the column value.

An example of both types of SOURCE.OCC usage will be clearer than all of these words.

Suppose we have a record that looks like this:

As an illustration of "normal" SOURCE.OCC values, consider this:

Now suppose we want to "line up" the XX values with the corresponding XXNUM values. This is shown in the following example:
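The two SOURCE.OCC behaviors can also be modeled in a short sketch (illustrative Python with hypothetical element names, not SPIRES table-definition syntax):

```python
# Illustrative model of SOURCE.OCC: a fixed occurrence number picks the
# same occurrence for every row; the literal 'N' lines the occurrence
# number up with the row number.
def column_value(occurrences, source_occ, row):
    """Pick the source occurrence for a column in the given (1-based) row."""
    n = row if source_occ == 'N' else source_occ
    return occurrences[n - 1] if n <= len(occurrences) else None

# "Normal" SOURCE.OCC: ADDRESS occurrence 1 is the street, 2 is the city.
address = ['123 Main St', 'Springfield']
assert column_value(address, 1, row=1) == '123 Main St'   # STREET column
assert column_value(address, 2, row=1) == 'Springfield'   # CITY column

# "SOURCE.OCC = N;": the nth XX occurrence lines up with the nth row.
xx = ['a', 'b', 'c']
assert [column_value(xx, 'N', row=r) for r in (1, 2, 3)] == ['a', 'b', 'c']
```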

SOURCE.BUILD = value;

A particular Column may be described as being derived from the multiple values of a source subfile data element. This new field can be defined for any COLNAME that also contains a SOURCE.ELEM value and is coded as follows:

When this construct is coded, the resulting output for the specified column will consist of a single value made up of the multiple source values concatenated, with any specified "Source.Build" string between successive values.
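In a sketch (illustrative Python, not SPIRES syntax), SOURCE.BUILD amounts to joining the occurrences with a separator string:

```python
# Illustrative model of SOURCE.BUILD: concatenate all occurrences of the
# source element into one column value, with the build string between them.
def build_column(occurrences, build_string):
    return build_string.join(occurrences)

assert build_column(['red', 'green', 'blue'], ', ') == 'red, green, blue'
```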

DEFAULT = value;

If the SOURCE.ELEM does not have an occurrence, the value given here will be used instead. This value is treated like an external form of the source element; in other words, it will be "processed" by the system-defined rules for the particular COLTYPE (see below) to create the column value.

Note: "DEFAULT = X'hex-characters'" may be used to supply a hex value.

LITERAL = value;

You may choose for a column to have a literal value not derived from elements in the source subfile by specifying the LITERAL statement instead of the SOURCE.ELEM statement. You include the desired value in the statement, e.g., "LITERAL = 9/1/1998;". This value is treated like an external form of the source element; in other words, it will be "processed" by the system-defined rules for the particular COLTYPE (see below) to create the column value.

Note: "LITERAL = X'hex-characters'" may be used to supply a hex value.

Below is the second group of commands, which describe how SPIRES should manipulate the source data to normalize it for RDBMS tables and, in the future, how SPIRES will interpret SQL commands issued against this data.

COLTYPE = type;

The COLTYPE statement describes the type of the column as one of a standard set of data types recognized by SQL. SPIRES has an Inproc and Outproc rule string defined for each type. The source data (in its output form) is processed through the Inproc and then the Outproc to produce the column data. This has the effect of normalizing all data of a particular type, putting it in a form that is recognized and understood by SQL and RDBMS data bases. The processing rules are system-defined; columns with a COLTYPE of DATE will have the same format in all SPIRES applications. (At this early stage, there is still some flexibility in the definitions of these types; contact your SPIRES consultant if you have design suggestions.)

COLTYPE values are CHAR, DATE, TIME, BITS, PACK, REAL, INT, HEX, and DELETE. [See 9.1, 9.2 for more information about DELETE processing.]
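A sketch of the normalization idea follows (illustrative Python; the real Inproc/Outproc rule strings are system-defined and not reproduced here, and the input forms accepted below are assumptions):

```python
# Illustrative model of COLTYPE normalization: output-form source data is
# run through a fixed, type-specific pipeline so that all columns of a
# given type share one format.  The formats shown are assumed, not the
# actual SPIRES rules.
from datetime import datetime

def normalize(value, coltype):
    if coltype == 'INT':
        return int(value)
    if coltype == 'DATE':
        # assumed input form m/d/yyyy, normalized to ISO yyyy-mm-dd
        return datetime.strptime(value, '%m/%d/%Y').strftime('%Y-%m-%d')
    if coltype == 'CHAR':
        return value.strip()
    raise NotImplementedError(coltype)

assert normalize('9/1/1998', 'DATE') == '1998-09-01'
```

The point of such a pipeline is that a DATE column in one table has exactly the same format as a DATE column in any other, so the generated tables are mutually compatible.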

COLWIDTH = n;

This is the allowed width of the column data, expressed as an integer. This value essentially becomes a piece of "element information" for the column, equivalent to the WIDTH statement in an eleminfo packet.

DECIMALS = n;

The DECIMALS value is an integer that represents the number of decimal places that a column of type PACK will have.

ISKEY;

This flag statement, which takes no value, indicates that the column is the key for the table. A key is necessary when SPIRES is building a table, since the table exists as a SPIRES database. A table may also have multiple keys. In this case you would code "ISKEY;" for each COLNAME that is to be included. Also, these columns must be in sequence as the first columns of the table. If there is no unique key column, omit this statement and SPIRES will generate a slot key for each row of the table.

RDBMS_COLUMN = value;

This statement may be used to specify an alternate COLUMN NAME that corresponds to the name of the actual RDBMS column name. This value may be up to 32 characters in length.

This value and the RDBMS values below are used by SPIRES to deal with data transformations to and from SPIRES subfiles to RDBMS databases.

RDBMS_DATATYPE = value;

This statement may be used to specify the RDBMS data type for the column in the RDBMS database.

RDBMS_DATALENGTH = n;

This statement may be used to specify the RDBMS data length for the column in the RDBMS database. This value may differ from that given in COLWIDTH above since the column data may be expressed in differing ways in the different systems.

The final two statements are miscellaneous ones:

COLUMN.OPTIONS = option1, option2, ... optionN;

This statement is a multi-valued field through which you may indicate various options to direct the table processor in its data transformation task. Each Option(n) value may be one of the following:

The BYPASS option tells SPIRES to ignore this column when looking for record values that have changed for change generation processing. [See 7.1.]

The SINGULAR option tells SPIRES that the element is multiply occurring by definition but that the element occurs only once in the records that will be processed. This can be used to circumvent S324 errors, which signal that too many source elements are being used to create multiple table entries. The limit is 16 multiply occurring elements. If you are thwarted by this limit, add this statement to any columns whose source elements occur only once but are defined in the SPIRES file definition as multiply occurring.

The DEPENDENT option tells the Table processor that the current source element is dependent upon the existence of other source element values in order for a row to be created. In other words -- if the only fields generated within a particular row of a table are "Dependent" fields then the row will not be generated.

The NOSUBTREE option tells the Table processor that the source element is not within the subtree given by the SOURCE.STRUCTURE value. This will ensure that a second source element with the same name is used.

COMMENTS = comments;

You may add comments as needed for the column definition.

Declaring a Pre-Defined Table

To use the same table definition in multiple contexts, you may store its definition in a tables subfile and refer to it there.

The subfile can be either the public subfile called TABLES or a subfile of your own ownership, whose goal record-type is defined with the $TABLE record definition:

Table goal records have the same elements as the table declaration above, plus a key element called ID (a 1-16 character name, no blanks, preceded by your account number) and several other optional record-level elements, such as AUTHOR, COMMENTS, etc. The only one that is not obvious, perhaps, is the SUBFILE element, where you name the subfile with which the table will be used. (The SUBFILE element seems to be primarily for your convenience as a reference comment, not an element used by table processing.)

To use a pre-defined table, you first select the subfile for which the table is defined. The example below shows that subfile being on the primary path, but you may select it through a subfile path if you need another subfile to be the primary.

Next, select the tables subfile through a new path.

Next, issue the SET DECLARE PATH command, naming the path you just established to tell SPIRES where to find table definitions that will be referenced in future commands. [See 6.2.]

Next, declare the table, referencing the key (the ID) of the table goal record that you want to use.

The table you've named Newtable, based on the stored table definition GQ.DOC.RACE1TABLE, is ready to use. If you had selected the RACES subfile through a path rather than as the primary subfile, you would precede the DECLARE TABLE command not only with the WITH DECLARE prefix but also with the THROUGH prefix to name the subfile path for which the table is being defined.

17.2  Using a Subfile Table

With subfile tables, you can represent a hierarchical SPIRES goal record-type as a collection of multiple tables, dynamically flattening the structural records for use with SQL commands (in the future) or to create data sets to pass to relational data base management systems as a table version of your SPIRES subfile. This section describes how you set up the tables in SPIRES for whatever uses you put them to.

Notice that the full details of "normal" table use are described first, followed by an explanation of a possibly simpler method using output control.

Tables and Paths

Tables are declared and used through paths. You select the source subfile (either as the primary subfile or as a subfile on a path), declare the table, and then establish a new path that uses that table. For example, assuming these commands are being executed from a protocol:

In the primary path, you continue to work with the original subfile, with the elements of the FAMILY goal records. But in path 1, you now have a different "view" of the subfile, essentially with a new set of goal records (the rows of the table), with new "elements", the columns of the table. Commands issued through the path that refer to elements, such as SET FORMAT $REPORT or SHOW ELEMENT CHARACTERISTICS, would name or display the new column elements of the path, not the original source file elements. You cannot display a record in the primary directly through the table path, however; for instance, a command like THROUGH 1 DISPLAY ALL would not work.

Suppose, for example, that the FAMILYMEMBERS table has a FAMILY_NAME column derived from the NAME element in the FAMILY subfile's goal record-type and a FIRST_NAME column also derived from the NAME element.

Unlike other paths, however, a table path is used indirectly. To see data through the table, you must generate a set from the records you want from the original subfile using the table definition as the blueprint for the generated data. For a display set, the procedure might look like this (continuing the example from above):

Notice that the data is generated on the primary path, using the table established in a separate path as a database template.

Commands you use to work with an established table include the SET TABLE and CLEAR TABLE commands:

As the syntax suggests, tables are always set through a path other than the primary one. The "tablename" must match the name of a declared table. [See 17.1.] The SET TABLE command establishes the table within a path (it must be a new path), and the CLEAR TABLE command clears the path and the declared table as well.

Typically, once you set the table path, you set a format, though you may certainly use just the standard SPIRES format. Additionally, you may want to use the SET FILTER command with the FOR * option to limit the "row" output to the rows you want; rows not matching the criteria expressed in the where clause would be eliminated from the output. For example, after setting the $REPORT format above, you could add the following command (assume the table has a SEX element in its definition):

The DEFINE SET command with the TABLE option defines a set using the table's fields as the elements for the set. It can be used with a display set as well (DEFINE DISPLAY SET). The TABLE option thus replaces the list of elements you would normally place on the command:

where "tablename" is the name of a currently declared table.

Working with a Table-Generated Set

If you generate a regular set, SPIRES creates a sequential data set (not a tree-structured data set), which it stores on disk. You can use the ATTACH SET command to work with the set:

This command works only for sets generated via a table. (Likewise, a table-generated set does not work with normal set processing, e.g., "FOR SET setname".) Again, you do not have key access to the "records" in the attached set; for example, a "DISPLAY key" command will fail. (In general, you would process the records through Global FOR.) But otherwise, the attached data set looks and behaves like a regular SPIRES data set or non-indexed subfile.

Using Tables with Output Control

To use tables with output control, you first declare the tables as described above. Then you declare the output control packets, including any packets that will use the tables. Those packets would contain the TABLE.NAME statement (naming the table to use; again, it must be a table that is already declared at the time the output control is declared) and, optionally, a TABLE.WHERE statement, which internally establishes a "SET FILTER FOR * WHERE clause" command using the WHERE clause you provide.

One advantage of using output control with tables is that you do not have to set up the paths yourself; SPIRES does the necessary work behind the scenes.

You can use output control's change generation capabilities with tables to generate changes in the appropriate table form. [See 9.]

Using Stored Tables

Table definitions can be stored in a system subfile called TABLES, in which case using them generally means establishing another path (in which the TABLES subfile is set) as a declare path. That process usually looks like this:

The table is now available just as if it had been declared with the DECLARE TABLE command alone. [See 17.1.] You would probably then want to establish the table path in a new path, not the same one as the declare path.

17.3  Establishing a Subfile Input Table: The DECLARE INPUT TABLE Command

The DECLARE INPUT TABLE command defines a table that may be used in conjunction with Input Control processing to map table columnar data elements into a set of data elements of a hierarchical record. It either names an input table definition stored elsewhere, which is then loaded into SPIRES memory for your use, or it announces that the table definition statements follow (in the standard SPIRES format, "element = value;"), ending with an ENDDECLARE command. In the latter situation, the DECLARE INPUT TABLE command can be issued only from within a SPIRES protocol.

This section describes the DECLARE INPUT TABLE command, including the parts of the table definition; the next section describes how to use a table once it is defined and declared.

The syntax of the command is:

The "tablename" is a name (1-16 characters, no blanks) you will use in a subsequent command to identify the table you want to use. You may declare multiple tables, each with its own DECLARE INPUT TABLE command.

You use the WITH DECLARE prefix when the table definition has been previously created and stored in a subfile of input tables you have access to. This subfile should be compiled using the DEFINED-BY = $INPUT.TABLE record definition. The "record-name" is the ID of the table definition, the key value of a record in a previously declared input table subfile. [See 6.2.]

The Input Table Definition

In the simplest case, the DECLARE INPUT TABLE command is followed by the input table definition, executed from within a protocol. That part of the protocol has these commands and statements:

The table definition consists of column definitions, which describe how "columns" of the input table should be mapped into elements in the selected SPIRES subfile (the destination elements).

Each new column definition begins with a COLNAME statement (the key of the COLUMNS structure). For each column you want, you enter a column name, and you usually name the destination element for the column (in the DEST.ELEM statement). The statements in the COLUMNS structure are each described below.
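The mapping a column definition describes can be modeled in a few lines of Python (a sketch of the concept, not SPIRES syntax; the column names, element names, and the map_row helper are invented for illustration):

```python
# Each dictionary stands in for one column definition: the input
# column (COLNAME) and the destination element (DEST.ELEM) it feeds.
col_defs = [
    {"COLNAME": "LAST",  "DEST.ELEM": "NAME",  "ISKEY": True},
    {"COLNAME": "PHONE", "DEST.ELEM": "PHONE", "ISKEY": False},
]

def map_row(row, col_defs):
    """Build destination-element values from one input-table row."""
    record = {}
    for cd in col_defs:
        record[cd["DEST.ELEM"]] = row[cd["COLNAME"]]
    return record

print(map_row({"LAST": "Smith", "PHONE": "555-0100"}, col_defs))
```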

The following statements appear outside the column definitions. These statements are:

The entire table definition ends with an ENDDECLARE command.

Statements (Elements) of a Column Definition

Except for the required COLNAME statement, all statements in a column definition are optional. A column definition with only a COLNAME and a DEST.ELEM statement is common. Most of the other statements are for special situations.

After COLNAME, the first group of statements is concerned with the source of the data, which starts here in its external form.

COLNAME = column-name;

This required statement gives the name of the column from the input table which provides the source value to be stored. Since the source data has been defined as a SPIRES input table, the column name is really an input table's data element name.

COLTYPE = type;

The COLTYPE statement describes the type of the column as one of a standard set of data types. The primary reason to include this field is that SPIRES commands are available to create SPIRES record definitions (RECDEF) from Declare Input Table structures; SPIRES can then use this information to generate the INPROC and OUTPROC rule strings defined for each type.

DEST.ELEM = elem-name;

This is the name of the element in the selected subfile that will be the destination for the column data. The DEST.ELEM name must name a data element within the destination structure (DEST.STRUCTURE) if that statement has been coded. There are exceptions to this rule if the ISKEY statement is included for this particular COLNAME.

DECLARE.KEY = elem-name;

This is the name of an element defined in an "element definition" subfile (named in the DECLARE.SUBFILE statement; see above). It is the key of a goal record in that subfile; the goal record is an element definition similar to a dynamic element definition. SPIRES will use that element definition to create external values, which will then be used as input values to create destination element values.

ISKEY;

This flag statement, which takes no value, indicates that the column is the key for the destination structure. If the DEST.STRUCTURE statement has been coded, the DEST.ELEM value is not necessarily restricted to being within the structure. In fact, it is necessary to include all DEST.ELEM "keys" that are needed to "locate" the input row within the destination goal record.

RDBMS_COLUMN = value;

This statement may be used to specify the name of the actual RDBMS column that provided the source value for the input table column. This value may be up to 32 characters in length.

COLWIDTH = value;

This statement may be used to specify the width of the columnar value. This value is especially important if the Declare Input Table definition is to be used to generate a RECDEF record; in that case the value should be coded for any key data element (see the ISKEY statement above).

18  Packed Decimals in SPIRES

SPIRES supports a special type of "packed decimal", which is a form for numeric data values that possibly have a decimal portion. The support is system-wide in SPIRES: elements may be stored as packed decimals in subfile records, variables in protocols, formats or USERPROCs may be packed decimals, and general arithmetic operations may be done with packed arithmetic. Compared to floating-point ("real") or integer arithmetic, packed decimal arithmetic has a larger range, more precision and more accuracy.

Packed decimals in SPIRES are not exactly the same as they are in other computer languages. Because they are much more powerful in SPIRES, there are many more capabilities and many more details involved in their use. The first few pages of this chapter may be considered a "primer" to packed decimals in SPIRES. Details on exceptional uses or internal handling are provided later in the chapter.

Allowed Range

The packed decimal data type has the largest range of any of the numeric data types supported by SPIRES, which are binary, floating point, and packed decimal. Arithmetic can be performed on values having as many as thirty places of precision and an exponent from -128 to 127. ("Precision" and "exponent" are terms defined below.) Data elements whose values will not be used for arithmetic can have up to 509 places of precision.

Unlike the standard packed decimal data type of other systems, the SPIRES version allows the input value to have a decimal portion, such as "1.36". That statement may sound peculiar, but the word "decimal" in "packed decimal" refers not to the type of input value but to the manner of storage, cf. binary or hexadecimal.

Input Forms for Packed Decimals

Packed decimal values may be input in one of four forms:

Internal Adjustment and Storage

There are four ways that an input value can be transformed to a packed decimal value:

In SPIRES, packed decimals are not stored internally the same way as in other systems. When a value is input, any decimal point is removed and an extra byte is added to the end of the value, telling SPIRES in effect where the decimal point belongs. The fact that the extra byte exists is only important if you want to access packed decimal elements in SPIRES files from a non-SPIRES program.

The adjustment described above creates the two components of a packed decimal: the "integer" and the "exponent". Using the forms of input values shown above, we can say that all input values are converted to the exponential form, where the portion of the value preceding the E is the "integer" and the portion that follows is the exponent. The exponent represents where the decimal place exists in relation to the integer.

Below are some numeric values shown with their exponential equivalents:

The exponent tells the number of places to move the implied decimal point that follows the integer, moving it to the right if the exponent is positive and to the left if negative. For example, "120E_1" indicates that the implied decimal point after the zero in 120 should be moved one place to the left, resulting in "12.0".

The number of digits in the integer is the "precision" of the value. The precision of any value (whether it is in exponential form or not) can be determined by counting the number of digits in it, beginning with the left-most non-zero digit and continuing to the right-most digit, zero or not:

Note that the first two values above, though both "equal" to 12, are not exactly the same in packed decimal notation: "12E0" and "120E_1". The two values have different exponents (0 and -1) and different precisions (2 and 3). Still, if SPIRES compared the two values in arithmetic, they would be considered equivalent.
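This notation can be modeled in a few lines of Python (an illustrative sketch, not SPIRES code; the function names are invented). It parses the underscore convention for negative exponents and confirms that "12E0" and "120E_1" differ in precision and exponent yet compare as arithmetically equivalent:

```python
from decimal import Decimal

def parse_packed(notation):
    """Parse SPIRES-style exponential notation, e.g. '120E_1' -> (120, -1).
    The underscore marks a negative exponent."""
    integer, exp = notation.split("E")
    exponent = -int(exp[1:]) if exp.startswith("_") else int(exp)
    return int(integer), exponent

def precision(integer):
    """Count digits from the left-most non-zero digit to the right-most."""
    return len(str(abs(integer)).lstrip("0"))

i1, e1 = parse_packed("12E0")    # precision 2, exponent 0
i2, e2 = parse_packed("120E_1")  # precision 3, exponent -1
assert (precision(i1), precision(i2)) == (2, 3)
# Different exponents and precisions, yet arithmetically equivalent:
assert Decimal(i1).scaleb(e1) == Decimal(i2).scaleb(e2)
```

Note that precision(0) is 0 under this counting rule, matching the treatment of 0E0 described below.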

For the value 0 (0E0), be aware that its precision is 0, even though there is one digit in the integer. Remember that you start counting digits with the left-most non-zero digit. We will use the terms "precision" and "number of digits in the value" somewhat interchangeably; note that neither term counts leading zeroes.

In most SPIRES applications that use packed decimals, the difference in precision between 12 and 12.0 is not a valuable distinction; in fact, it may be undesirable. If the values are money values, for instance, you might want to enter 12 or 12.0, but the value you mean (and want returned) is "12.00". Thus, functions and processing rules (notably the $DECIMAL function and $DECIMAL system proc) are available that can give the value a fixed number of decimal places (that is, setting the exponent) and then altering the precision of the value to fit. For example, if you process a packed element through the $DECIMAL system proc, requesting two decimal places, the values 12 and 12.0 would both be converted to 12.00 (1200E_2). On the other hand, a value such as 11.996 would also be converted to 12.00: extra places of precision are removed, and appropriate rounding occurs.
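Python's decimal module can approximate the behavior described for $DECIMAL. In this sketch, to_places is a hypothetical helper, and half-up rounding is an assumption (it matches the 11.996 example, but the exact rounding rule is not spelled out in the text):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_places(value, places):
    """Mimic the effect described for $DECIMAL: give the value a fixed
    number of decimal places, rounding away any extra precision."""
    quantum = Decimal(1).scaleb(-places)   # e.g. 0.01 for places=2
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

print(to_places("12", 2))      # both 12 and 12.0 become 12.00 (1200E_2)
print(to_places("12.0", 2))
print(to_places("11.996", 2))  # extra precision removed, with rounding
```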

Other functions and system procs can be used to perform the opposite task: giving values a fixed precision and altering their exponents as appropriate. [See 18.1.3.]

Lengths of Packed Decimals

The internal storage length of a packed decimal value depends on its precision, i.e., the number of significant digits you want to store:

where "length" is the length of the value in bytes and "digits" is the number of digits in the value that you want to store. If the result is not an integer, it should be rounded up to the next integer.

Here are some examples using this formula:

The formula shows one reason why you might want to fix, or at least limit, the precision of a value: to fix, or at least limit, the length. That type of processing is done by the $PRECISION or $WINDOW functions and system procs.

When you declare a packed decimal-type variable in a vgroup, you either include a LENGTH statement or let the length default to 4 bytes. Remember from the above formula that the length and the precision are related; here is the formula reversed:

meaning that a four-byte packed decimal variable can store only five places of precision. A value assigned to a pre-allocated four-byte packed variable will therefore lose precision if it contains more than five places. So, for example, the input value 123456 would be converted to 12345E1. Consequently, if you want to handle large packed decimal values in vgroups, be sure that you raise the length from the default of four bytes to one suitable for your needs. Eight bytes, allowing 13 places of precision, is usually a good choice.
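The two formulas, reconstructed here from the worked examples in this chapter (length = (digits + 3) / 2, rounded up, and its reverse, digits = length * 2 - 3), can be checked with a short Python sketch; the function names are invented:

```python
import math

def packed_length(digits):
    """Bytes needed to store a value with this many digits of precision:
    (digits + 3) / 2, rounded up to the next integer."""
    return math.ceil((digits + 3) / 2)

def max_precision(length):
    """The formula reversed: digits = (length * 2) - 3."""
    return length * 2 - 3

assert packed_length(5) == 4     # five digits fit in four bytes
assert max_precision(4) == 5
assert max_precision(8) == 13    # eight bytes allow 13 places

# A six-digit value in a four-byte variable loses its last digit,
# as in the 123456 -> 12345E1 example (truncation, per the text):
value, digits = 123456, max_precision(4)
truncated = int(str(value)[:digits])
exponent = len(str(value)) - digits
print(f"{truncated}E{exponent}")
```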

Output Forms of Packed Decimals

When packed decimal values are displayed, they will appear in one of the four input forms, depending on the value and on how (or if) it is processed for output. For example, using the $DECIMAL function or system proc for output, the value could be displayed in decimal notation with a fixed number of decimal places. However, by default (that is, through a $PACK.OUT or A82 processing rule for an element, or via the "/*" command for a variable), SPIRES uses the following rules to determine the form:

Users often control the output form used by applying processing rules and functions, such as the $EDIT function or system proc, which lets you specify an edit mask that is applied to the value on output. As mentioned above, the $DECIMAL function or system proc is commonly employed to force all values to have the same number of decimal places, giving them a uniform appearance.

Packed Decimals as Record Elements

Numeric data elements that may have non-integer values (such as monetary figures) are usually declared as packed decimal elements. About a dozen system procs and a half dozen actions supporting packed decimal elements are available, providing various tests and conversions; these are described later. Listings of them are available by issuing the commands EXPLAIN ACTIONS, PACKED DECIMAL and EXPLAIN SYSTEM PROCS, FOR PACKED DECIMAL VALUES.

Below are examples of INPROC, OUTPROC, SRCPROC and PASSPROC strings that are commonly coded for packed decimal elements. Details on the system procs shown are provided in the reference manual "System Procs" or online through the EXPLAIN command.

Generally speaking, the stored form of packed decimal elements should have a fixed length (i.e., a limited amount of precision) and a fixed number of decimal places (i.e., a fixed exponent). Here are the appropriate statements for the file definition:

where "bytes" is the length in bytes and places is the number of decimal places to appear in the value. To help you determine the appropriate length, think of the longest value you expect for that element, determine its precision (including the decimal places you will allow), and then use the formula introduced earlier:

where the result is rounded to the next highest integer if it is fractional. (You might want to add an extra byte or two if you are not certain of the length of the largest possible value.)

Here is an example of an element using these processing rule strings:

The INPROC string first converts the input value to a packed decimal if it can ($PACK); if it cannot do so, a serious error occurs and the input value is rejected. The converted packed value is then verified to be greater than or equal to zero and less than or equal to 1000 ($PACK.TEST); again, an error will occur if either test fails. The value is then adjusted so that it has a single decimal place and is four bytes long. (If the value has too many decimal places, they are truncated, with appropriate rounding applied; if the value is longer than four bytes after any truncation, an error occurs and the value is rejected.) The OUTPROC converts the stored packed value for display.

The four-byte length was determined by figuring the longest allowed value (1000), including the number of decimal places allowed (1000.0), finding its precision (5), and applying the formula ((5+3)/2 = 4).
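The validation steps of that INPROC can be approximated in Python (a rough analogue, not SPIRES; meter_reading_inproc is a hypothetical helper, and the half-up rounding mode is an assumption):

```python
from decimal import Decimal, ROUND_HALF_UP

def meter_reading_inproc(raw):
    """Approximate the INPROC described: convert to a decimal value
    ($PACK), range-check 0..1000 ($PACK.TEST), then adjust to one
    decimal place with rounding."""
    try:
        value = Decimal(raw)                       # $PACK: reject non-numeric input
    except ArithmeticError:
        raise ValueError(f"not numeric: {raw!r}")
    if not (Decimal(0) <= value <= Decimal(1000)):  # $PACK.TEST: range check
        raise ValueError(f"out of range: {raw!r}")
    return value.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

print(meter_reading_inproc("12.34"))   # stored with one decimal place
print(meter_reading_inproc("1000"))
```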

Here are some sample values as they would be processed:

If you decide to index a packed element like METER.READING, you would code a SRCPROC and PASSPROC like this:

where "bytes" and "places" are the same values used in the INPROC, and "elemname" is the name of the element being indexed.

If desired, you could include the $PACK.TEST system proc in the SRCPROC string so that search values outside of the allowed range of element values would be rejected.

Note: A limitation of packed decimal elements is that negative values may not be indexed properly for your needs. For example, the number -2 will be indexed between 1 and 3. A search where the equality operator is used (e.g., FIND READING = -2) will work properly, but a range search will not (FIND READING > 1 would retrieve records with "-2" values). If you must index packed values that may be negative and you must perform range searches, you should probably use floating-point (real) numbers rather than packed decimals.

If your packed decimal value is a monetary value, you might prefer to display it on output with an edit mask:

The $EDIT system proc converts the value from a packed decimal to a string, adjusting the value into the given mask. [See 19.] The $UNEDIT proc is used on input to strip off the extra commas and dollar signs of the input value, which would cause the value to be rejected by $PACK if they were there.

Though making a packed element fixed-length is recommended here, it is not required. If you will not be indexing the element, for example, you may not want to fix the length. [See 18.2.]

Packed Decimals as Variables

Packed decimal variables are frequently used, especially since the default arithmetic in SPIRES uses packed decimals. If a packed decimal variable is declared in a vgroup, it must be assigned a length, usually from 2 to 16 bytes long; if no LENGTH statement is coded, the default length is four bytes. (The shortest packed decimal value, 0, is two bytes long; the longest packed decimal that SPIRES can use to perform arithmetic is 255 bytes long, though values longer than 16 bytes will be "truncated" for arithmetic, and results will be no longer than 16 bytes.) Remember that the chosen length controls the amount of precision the stored variable will have.

To guarantee that a variable value will have a certain number of decimal places, the DISPLAY-DECIMALS statement can be coded in a variable definition, specifying the number of decimal places the variable will have when it is output. The function of this statement is similar to the $DECIMAL function.

Here is a sample variable definition in a vgroup:

A UNIT.COST variable can have 9 places of precision ((6*2)-3); any more will cause truncation of the value and appropriate rounding. On output, the value will be adjusted to have two decimal places.

The remainder of this chapter covers packed decimals more thoroughly, presenting this material in an expanded, more detailed manner.

18.1  General Information on Packed Decimals

The packed decimal data type has the largest range of any of the numeric data types supported by SPIRES (binary, floating point, and packed decimal). Arithmetic can be performed on values having as many as thirty places of precision and an exponent from -128 to 127. ("Precision" and "exponent" are terms defined below.) Data elements whose values will not be used for arithmetic can have up to 509 places of precision.

Unlike the standard packed decimal data type of other systems, the SPIRES version allows the input value to have a decimal portion, such as "1.36". Therefore, data stored as "packed" in a SPIRES file will not be read correctly if another program accesses those values directly and assumes they are packed in the standard manner. Such problems do not usually arise, however.

Though the range and power of packed decimals are much greater than those of integers or floating-point numbers, packed decimals are not unconditionally recommended for all numeric applications. Due to a sorting limitation discussed in detail later, you may not want to use them for indexed elements, for example. [See 18.2 for a complete discussion of packed decimals as data elements.]

18.1.1  The Components and Terminology of Packed Decimals

Several important terms are used frequently when discussing packed decimals: integer, exponent, precision, and magnitude. These specify components or characteristics of a packed decimal that are described in this section.

When a numeric value is converted to PACKED (for "packed decimal"), it is internally adjusted to have a variable length "integer", followed by an exponent representing where the decimal point exists in relation to the integer. For notating this form, an E is used to separate the integer from the exponent, and an underscore (_), as opposed to a minus sign (-), is used to indicate a negative exponent. (Both the integer and the exponent may be positive or negative; the underscore character for the exponent is used to prevent confusion with the subtraction operation.) The E indicates that the integer is to be multiplied by 10 to the power indicated by the exponent.

Below are some numeric values in "standard" notation and in packed decimal exponential notation:

As you can tell by the examples, the exponent indicates the number of places to move the decimal point following the integer, moving it to the right if the exponent is positive or to the left if negative. For example, 120E_1 indicates that the implied decimal point after the zero in 120 should be moved one place to the left, resulting in "12.0".

Notice from the examples that trailing zeroes are never discarded in the conversion; also, no value can have a positive exponent unless the value was input in exponential notation, as shown by the penultimate example. These details relate to the concept of "precision".

The integer will always be adjusted to have at least one digit, even if it is only a zero.

The "precision" of the packed decimal value can be expressed as a numeric value: the number of digits in the integer, counting the left-most non-zero digit as "one", and continuing through to the right-most digit, whether it is zero or not.

For example,

The "exponent" can have a range from -128 to 127 inclusive. The exponent represents the power of ten by which the integer is to be multiplied to get the packed value. When the exponent is negative, its absolute value is the number of decimal places in the packed value. For example, the value 12.35 (1235E_2) has an exponent of -2; the absolute value of -2 is 2, the number of decimal places in the value.

The sum of the packed value's precision and exponent is called the value's "magnitude", a number that is occasionally useful in handling packed decimal values.

Note: the magnitude of the value is also equal to one more than the "power of ten" assigned to the left-most non-zero digit of the non-packed value. For example, 12E_5 (or 0.00012) has a magnitude of -3, which can be calculated by adding the precision (2) and the exponent (-5), or by adding one to the "power of ten" assigned to the "1" (the left-most non-zero digit), which is "-4".

Here are some packed values and their exponents, precisions and magnitudes:

You may have noticed that each of the final three values was a different way to write 100 in exponential notation. You can vary the exponent and the precision of a value, but its magnitude will always remain the same. The ability to adjust the precision and exponent of a value without affecting its basic value will be useful later.
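These statistics can be computed with a short Python sketch (illustrative only; this is not the $PACKTEST function, and the stats name is invented):

```python
def stats(integer, exponent):
    """Precision, exponent, and magnitude of a packed value written
    as integer x 10**exponent."""
    prec = len(str(abs(integer)).lstrip("0"))  # leading zeroes not counted
    return {"precision": prec, "exponent": exponent,
            "magnitude": prec + exponent}

# 12E_5 is 0.00012: precision 2, exponent -5, magnitude -3
print(stats(12, -5))

# Three ways to write 100: precision and exponent vary,
# but the magnitude is always 3.
for i, e in [(1, 2), (100, 0), (10000, -2)]:
    print(stats(i, e))
```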

See EXPLAIN $PACKTEST FUNCTION for more details about how to retrieve the precision, exponent, magnitude, and other statistics of packed values.

18.1.2  The Input and Output Forms of Packed Decimal Values

There are four different ways in which packed decimals may be entered or displayed:

You, the user, have a certain amount of control over how a packed decimal will be displayed. For example, by using the $DECIMAL function or system proc, you can require all values to be displayed with a given number of decimal places in decimal notation.

By default, however (that is, when packed decimal values are being displayed by action A82 or by the $PACK.OUT system proc for elements, or with the "/*" command for variables), SPIRES uses the following rules to determine which form to use:

Infinity is always displayed as "-@@" or "@@".

Here are those three ways of looking at one hundred again:

In the first example, the exponent is positive, so exponential notation is used. In the second, the exponent is zero, and thus integer notation is used. In the third, since the exponent is neither positive nor zero, and the magnitude is not less than -3, the value is displayed in decimal notation.

Disregarding the rules for a moment, why wouldn't SPIRES display the first sample value "1E2" as "100"? The reason is that SPIRES does not change the precision or exponent of the value on output, even though "100" may be easier to read than "1E2". Suppose that the packed value 1E2 is stored in a subfile record and that when the record is transferred, the value is displayed as "100". When the record is updated, the value would be reconverted to "100E0" for storage, which is not precisely the same as "1E2", since the exponent and precision are now different. In other words, when SPIRES displays the value, it displays it in a form that, when reconverted to packed, would retain the value in its original packed form.

Notice again that in the example, whenever the exponent is negative, the number of decimal places displayed is the opposite of the exponent. That is, for "10000E_2", the value displayed is "100.00", which has two decimal places; and "2" is the negative of the exponent "-2".
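The form-selection rules worked through above can be sketched in Python (illustrative, with invented function names; the branch for magnitudes below -3, which falls back to exponential notation, is inferred from the examples rather than stated explicitly):

```python
def display_form(integer, exponent):
    """Choose an output form: positive exponent -> exponential notation;
    zero exponent -> integer notation; negative exponent with
    magnitude >= -3 -> decimal notation; otherwise exponential."""
    magnitude = len(str(abs(integer)).lstrip("0")) + exponent
    if exponent > 0 or (exponent < 0 and magnitude < -3):
        suffix = str(exponent) if exponent > 0 else "_" + str(-exponent)
        return f"{integer}E{suffix}"
    if exponent == 0:
        return str(integer)
    # Decimal notation: -exponent decimal places, zero-padded as needed.
    digits = str(abs(integer)).rjust(-exponent + 1, "0")
    sign = "-" if integer < 0 else ""
    return sign + digits[:exponent] + "." + digits[exponent:]

print(display_form(1, 2))       # exponent positive: exponential notation
print(display_form(100, 0))     # exponent zero: integer notation
print(display_form(10000, -2))  # negative exponent, magnitude 3: decimal
```

Note that the precision and exponent are preserved: 10000E_2 displays as "100.00", never as "100", so that reconverting the displayed value yields the original packed form.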

18.1.3  Using Packed Decimals

Because packed decimals can be handled more accurately than floating-point ("real" type) values, it is generally recommended that you use packed decimals when you are working with non-integer (i.e., decimal) arithmetic.

Often SPIRES will make the decision to use packed arithmetic for you. For example, in "default arithmetic", SPIRES will do arithmetic using packed values. That is, if both operands of an arithmetic operation are of types other than PACKED, INTEGER or REAL (usually that means STRING or CHAR) then SPIRES will convert them to packed values in order to do the computation.

For instance, if the variables X and Y are not pre-allocated:

Note that if the first operand is PACKED, INTEGER or REAL, the second operand is converted to that type in order to carry out the computation. If the first operand is not one of these types but the second operand is, then the first operand is converted to the type of the second for the computation.

Many functions are available for handling packed decimal values. These functions are typically used in protocols, formats and USERPROCs. Most of the capabilities of these functions are also available in processing rules for data elements, which are discussed later. [See 18.2.] All of these functions are described in detail in the reference manual "SPIRES Protocols"; you can use the EXPLAIN command to get information about them online.

The last three functions are particularly interesting and are worth further discussion. First however, it is useful to know a little bit about how packed decimal values are handled by SPIRES, and in particular, how many bytes of storage they require.

How Packed Decimals are Stored in SPIRES

A packed decimal in SPIRES can consist of a varying number of bytes in which the integer portion (and sign of the integer) is stored with a byte at the end that contains the exponent. The length in bytes of a packed decimal depends on the length of the integer, that is, the precision of the value. Each two digits of the integer are stored in a single byte. The last byte before the exponent byte contains one digit of the integer and a sign character. Pictorially, it looks something like this:

where each "|" character demarcates each byte and where each "d" is a digit of the integer, "s" is a sign character for the integer, and "exp" is the trailing byte that indicates the exponent. If there is an even number of "d"s in the value, the first "d" stored will be a zero (to fill up the byte). In fact, if the allocated length of a packed variable would be longer than the length needed for a given value, the first "d"s will be zero, as needed, to fill up the variable length. Such leading zeroes are ignored on output, however.

The length in bytes of a packed decimal value (including the exponent at the end) can be determined using a simple formula:

where "digits" is the number of digits in the integer of the packed decimal (i.e., the precision). If there is a remainder after the division by 2, the result should be rounded up to the next integer.

Here are some examples using this formula:

Note that the shortest packed decimal value (representing 0, shown above) is two bytes long.

It is worth noting that the formula is also useful when reversed:

which suggests that if a packed decimal variable has a length of four bytes, it can store a value having five digits of precision. [See 18.3.]
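The two directions of the formula can be sketched in Python; the helper names here are illustrative, not SPIRES functions.

```python
# Illustrative Python helpers (not SPIRES functions) for the
# length/precision relationship of SPIRES packed decimals.

def packed_length(digits):
    """Bytes needed for `digits` digits of precision: the digits plus a
    sign nibble, two nibbles per byte (rounded up), plus the exponent byte."""
    return (digits + 2) // 2 + 1

def packed_digits(length):
    """Digits of precision that fit in `length` bytes (the reverse formula)."""
    return length * 2 - 3

print(packed_length(1))   # 2 -- the shortest packed value
print(packed_length(5))   # 4
print(packed_digits(4))   # 5
```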

The first three examples above demonstrate the main point that the length of the packed value depends on the precision of the value. All three are "equal to" 100, but each has a different precision and so each has a different length. In some cases you may want to limit the precision of a value so that it is no longer than a certain length, or set the precision of a value so that it is a certain length, and those are the purposes of the $WINDOW and $PRECISION functions respectively.

The $PRECISION function will add zeroes to or remove digits from the right end of the integer to give the value the specified precision. As this occurs, the exponent is adjusted accordingly. If the packed value 1E2 is to have three digits of precision, it would be changed to 100E0; the packed value 10000E-2, processed under the same conditions, would also be changed to 100E0.

The $WINDOW function works similarly except that it can only subtract precision (thus limiting the length) and cannot add it.

Earlier it was shown that when the exponent of a packed decimal is negative, the negative of that exponent is the number of decimal places the value would have. Thus, if the number of decimal places is fixed, all packed values will have the same exponent. The $DECIMAL function, which adjusts a packed value so that it has a fixed number of decimal places, has the opposite purpose of the $PRECISION function. The $DECIMAL function sets the exponent, which may alter the precision of the value; conversely, the $PRECISION function sets the precision, which in turn may alter the exponent.

When the $DECIMAL function changes the precision of a value, it either adds precision by appending zeroes to the value (for example, 100 to 100.00 if two decimal places are called for) or subtracts precision by removing digits from the end of the value and rounding it upward if appropriate (e.g., 32.974 to 32.97, but 32.975 to 32.98).
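Python's decimal module offers a close analogue: Decimal.quantize fixes the exponent (the number of decimal places), padding or rounding much as described above. The helper name below is made up for illustration.

```python
from decimal import Decimal, ROUND_HALF_UP

def fix_places(value, places):
    """Fix the number of decimal places, padding with zeroes or
    rounding half-up, analogous to the $DECIMAL function above."""
    return value.quantize(Decimal(1).scaleb(-places), rounding=ROUND_HALF_UP)

print(fix_places(Decimal('100'), 2))      # 100.00 -- precision added
print(fix_places(Decimal('32.974'), 2))   # 32.97  -- digits dropped
print(fix_places(Decimal('32.975'), 2))   # 32.98  -- rounded upward
```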

18.2  Packed Decimals as Data Elements

Numeric data elements that may have non-integer values (such as monetary figures) are usually declared as packed decimal elements. However, in some cases they should not be, while in others, special precautions (i.e., extra processing rules) should be taken in order to use them effectively. About twenty processing rules (actions and system procs) are available for handling packed decimals: to convert an input value to packed decimal, to test the packed decimal's range, to reconvert the packed decimal to string for output, and so forth. This section will discuss the main limitation of packed elements, and present sample processing rule strings that you can use to avoid any problems that may arise from that limitation.

The limitation concerns data element sorting in these four cases:

The type of sorting involved in the above processes (called "character-by-character sorting") will not work properly unless the values being sorted all have the same length and number of decimal places, and none of the values is negative. The other methods of record sorting (the SEQUENCE command and SPISORT) always work properly with packed decimals, and thus are not a problem.

The individual actions and system procs used with packed decimals are listed in the "File Definition" and "System Procs" manuals respectively, or may be shown online by issuing the commands EXPLAIN ACTIONS, PACKED DECIMAL and EXPLAIN SYSTEM PROCS, FOR PACKED DECIMAL VALUES.

Sample Processing Rule Strings

Below are some possible "recipes" to follow that suggest various processing rule strings to code, depending on how you want to use your packed decimal elements. Details about the processing (e.g., what types of errors can occur, what parameters are available) are discussed in the SPIRES reference manual "System Procs". The simplest rule strings are discussed first; later, rule strings to handle the sorting considerations mentioned above are discussed. Note too that the simplest rule strings are used only when no LENGTH statement is specified for the element; different rule strings are used (and described further down) when you want a fixed length for the element.

The Simplest Processing Rule Strings

If the packed element does not fall into one of the four categories of sorting considerations discussed above, then the processing rules may be as simple as:

These are the simplest rule strings possible for packed values, converting the string value to a packed decimal on input and reconverting on output. No other change is made to the value; the precision and exponent of the value will not change. Because the precision of the value is not set, the value may vary in length, as discussed in the previous section. [See 18.1.2.] Thus, you should not code a LENGTH statement for a packed decimal element with the above INPROC. (Another INPROC rule string can be coded to handle that situation; see the category "Rule Strings for Fixed-Length Packed Decimals" below.)

Another simple form is:

The $DECIMAL proc adjusts the packed decimal value to have "3" decimal places here; like the $DECIMAL function, the $DECIMAL proc sets the value to have a given exponent (the opposite of the number of decimal places). Such a declaration guarantees a certain amount of consistency from value to value; all values will have three decimal places. (You can specify another number instead of "3" if you like.)

The $DECIMAL proc can be placed on the OUTPROC instead of the INPROC if desired:

That means the stored value is the same as the input value, having the same exponent and precision, but for output, values will be adjusted so that they all have the same number of decimal places.

The $DECIMAL proc, by fixing the number of decimal places, may alter the value's precision. For instance, if the value has too many decimal places, the right-most digits are dropped and standard rounding will occur (14.3777 would become 14.378); on the other hand, precision may be added (14 would become 14.000).

Simple Rule Strings for Monetary Elements

If the packed element will be a money figure, you may want to use the following processing rule strings:

On output, the packed decimal value is processed through an edit mask, which adds a comma (if necessary) and a dollar sign to the value for display. On input, the value is first "unedited", that is, commas and dollar signs are removed, and then converted to a packed decimal, which is then adjusted to have two decimal places. You may design your own edit mask; instructions appear later in this manual. [See 19.] No LENGTH statement should be coded with these rule strings either.

Note that the precision of the value may be affected by the $DECIMAL proc. The assumption here is that if you enter a monetary figure, such as $96, you are being precise to the penny: $96.00. However, if you do not want to change the precision of the input value for storage, you may omit the $DECIMAL proc. The precision will be changed for output by the $EDIT proc.

Two System Procs for Testing Packed Decimal Values

Two other procs, inserted into the INPROC string after the $PACK proc, can be used to test the input value. The $PACK.TEST proc tests the range of the packed decimal value; the $RANGE proc checks the range of various parts of the packed decimal value, such as its precision or its magnitude. For example,

The above input string, used to process money values, tests that the input value is greater than or equal to zero, thus verifying that the value is not negative. If it is negative, the error flag is set, and the value is rejected.

Rule Strings for Fixed-Length Packed Decimals

Fixing the length of a packed decimal element means limiting the precision of the value. If you do fix the length, be sure to pick a size that is adequate for holding the largest number you expect and that allows the desired amount of precision for the value. The formula for doing this is repeated below.

To set a fixed length for a packed decimal, you can code statements like this:

where "length" is the desired length in bytes. To determine "length", you should figure out how many digits of precision you will allow in the packed value and then use the equation discussed previously:

with "length" rounded up to the next highest integer if it has a fraction. [See 18.1.2.]

Another set of rule strings can be used if you want the value to have a given number of decimal places and to have a fixed length:

where "length" is again the desired length and "places" is the number of decimal places desired. Using the formula above, the number of digits in the value is "(length*2)-3"; of these, "places" will be decimal places.

Rule Strings for Fixed-Length Monetary Elements

For a fixed-length monetary element, the statements might be like this:

This set is simply a combination of the monetary rule strings and the fixed-length "decimal" strings shown above.

Rule Strings to handle the Sorting Limitation

The main effect of the sorting limitation that SPIRES has with packed decimals is that in the four circumstances described at the start of this chapter, negative packed decimal values will not sort properly, but will in fact be intermingled with the positive values.

That means, for example, that packed decimal values to be indexed should probably not be negative. ("Probably" is used here because the index will work properly for searches that use the equality operator, such as "FIND TEMPERATURE = -15", but not for those using a range operator, such as "FIND TEMPERATURE > -15".)

Also, in order for the sorting and range searching of positive values to work properly, the values must have the same number of decimal places and the same length in bytes, both of which can be handled by the $DECIMAL.ADJ proc used above.

In many indexing cases, it is preferable to fix the length and the number of decimal places of the input value; that solves the problem for all four situations at once (as long as no values are negative):

and later, in the linkage section:

where "length" is the length of the packed value in bytes, "places" is the number of decimal places that the packed value will have, and "elemname" is the name of the element being passed to the index.

Notes on the rule strings above:

Though we talk about solving the sort problem, remember that it is solved only if none of the values is negative. If you will have negative values and one or more of the four situations described above arises, the element should probably be a floating point (real) element or binary element rather than packed decimal.

18.3  Packed Decimal Variables and Arithmetic

Packed decimal variables are frequently used in protocols, as well as in formats and USERPROCs within file definitions. They do not have the same sorting limitations that packed decimal elements may have, so they are perhaps easier to handle.

The easiest way to get a packed decimal variable is with a LET statement and the $PACK function:

Assuming that #PACKED is not a pre-allocated variable (see below), the length of the stored value (and hence the precision) depends on the given value.

In a compiled vgroup used in a format, USERPROC, or elsewhere, a packed decimal variable always has a fixed internal length in bytes. Hence, the precision of the input value may be changed in order to fit the value into the given length.

In a vgroup definition, a packed variable is declared using the TYPE statement:

Any value assigned to this variable will be converted to a packed value.

Consider the impact of the variable length on the input value, however. Using the formula discussed earlier for determining precision from the storage length, we can determine how many digits of precision are allowed in a four-byte packed decimal value:

In other words, the variable WEEKLY.COST specified above can have only five places of precision:

In effect, the $WINDOW function is applied to the value, restricting it to five places of precision. Note that rounding may occur when digits of precision are lost (e.g., 2178.28 to 2178.3).
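Python's decimal module can illustrate this windowing: a context precision limits significant digits without ever adding them, much like $WINDOW. The helper name is hypothetical.

```python
from decimal import Decimal, localcontext

def window(value, precision):
    """Limit `value` to at most `precision` significant digits,
    rounding if digits must be dropped; precision is never added."""
    with localcontext() as ctx:
        ctx.prec = precision
        return +value   # unary plus applies the context's precision

print(window(Decimal('2178.28'), 5))   # 2178.3 -- rounded to 5 digits
print(window(Decimal('42'), 5))        # 42     -- unchanged, nothing added
```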

The default length for a packed decimal static variable is four bytes, which allows only five places of precision. The length can be changed by specifying the LENGTH statement in the vgroup declaration.

For consistency, you may want a variable value to have a set number of decimal places when it is used, but not when it is stored. Coding the DISPLAY-DECIMALS statement in the vgroup declaration gives the variable that many decimal places when it is displayed or processed in arithmetic operations:

For example,

The stored variable only has five places of precision, but for displaying the value with two decimal places, precision is added.

If you want to access the stored value of such a variable without going through the DISPLAY-DECIMALS processing, you could redefine the variable with another. See the manual "SPIRES Protocols", section 4.2.2.1, or EXPLAIN REDEFINE STATEMENT IN VGROUPS.

18.4  Arithmetic with Packed Decimal Values

When you add or subtract packed decimal numbers, SPIRES retains the precision of both operands. Here are some examples:

This is in contrast to floating-point arithmetic, where you get results such as:

instead of the packed arithmetic answer of "234.50".
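Python's Decimal type follows the same significance rules and can illustrate the contrast; the operand values here are illustrative.

```python
from decimal import Decimal

# Packed-style arithmetic keeps the precision of both operands:
packed_sum = Decimal('100.00') + Decimal('134.50')

# Floating-point arithmetic drops the trailing zero:
float_sum = 100.00 + 134.50

print(packed_sum)   # 234.50
print(float_sum)    # 234.5
```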

If either of the operands is infinity (indicated visually by @@ or -@@), the result will be infinity.

For multiplication, SPIRES returns values with up to 30 significant digits, depending on the number of significant places in the operands. Again, precision is not haphazardly added or subtracted. That means, for example, that multiplying a value by 100 is not the same as multiplying it by 1E2:

Again, this contrasts with floating-point arithmetic:

Multiplying a value by infinity results in infinity (@@ or -@@).
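Python's Decimal type shows the same 100-versus-1E2 distinction:

```python
from decimal import Decimal

by_100 = Decimal('2.345') * Decimal('100')   # two digits of significance added
by_1e2 = Decimal('2.345') * Decimal('1E2')   # only the exponent shifts

print(by_100)   # 234.500
print(by_1e2)   # 234.5
```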

Division can be slightly more complicated; because the precision of the result may be difficult to predict, the $PRECISION or $WINDOW functions are often used to set the precision of the result, or the $DECIMAL proc is used to limit the number of decimal places in the result. Here are some examples:

The number of significant digits in the result may be as high as 27.

Unlike the other types of operations, division will not leave zeroes at the right end of the result past the decimal point. In other words, no zeroes will appear at the end of the integer when the exponent is negative:

The $REMAINDER function may be used to find the remainder in a packed decimal division operation. Consider this sequence of arithmetic:

In other words, the result of a division operation may be an approximation (though it is an approximation with 27 places of precision), and the remainder should sometimes be taken into consideration.

Note that division by 0 will give infinity (@@) as a result.
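A hedged sketch of the division points above, using Python's decimal module: Decimal.quantize plays the role of the $DECIMAL proc, and divmod yields a remainder much as $REMAINDER does.

```python
from decimal import Decimal

quotient = Decimal('10') / Decimal('3')          # a long approximation
                                                 # (28 digits by default in
                                                 #  Python; SPIRES caps at 27)
two_places = quotient.quantize(Decimal('0.01'))  # limit the decimal places

whole, remainder = divmod(Decimal('7.5'), Decimal('2'))

print(two_places)          # 3.33
print(whole, remainder)    # 3 1.5
```

Note that unlike SPIRES, Python's decimal module raises a DivisionByZero exception by default instead of returning infinity.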

In if-tests of packed decimal values, the comparison SPIRES does is similar to subtracting the second value from the first one and comparing the result to zero. For example,

After assigning values to the two packed variables, they are tested for equality by an if-test. The two if-tests are mathematically equivalent. So if you give SPIRES an if-test similar to the first one above, where two mathematical values are being compared, SPIRES treats it similarly to the second. While 1E2 and 100.00 may not be identical values internally or externally, one subtracted from the other does indeed equal zero.
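Python's Decimal type treats comparisons the same way:

```python
from decimal import Decimal

a = Decimal('1E2')
b = Decimal('100.00')

# Numeric equality holds even though the internal representations
# (precision and exponent) differ:
print(a == b)       # True
print(a - b == 0)   # True: the comparison behaves like a subtraction
```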

18.5  Handling Other Types of Numeric Data

Packed decimals in SPIRES are different from packed decimals in other computer languages, as pointed out in previous sections. [See 18.] Occasionally you may need to convert the SPIRES form into the standard form, for use in other programs. This can be done in a file definition or format (file or global) with the following OUTPROC string:

where "places" is the number of decimal places to be "overlayed" onto the standard packed value, and "length" is the desired length in bytes for the standard packed value. If all of the values in the SPIRES packed form already have the correct number of decimal places, the $DECIMAL proc is not needed. Similarly, the $ADJUST proc is not needed if the output values would end up being the proper length anyway (which is not likely unless they were padded on input and stored in fixed length).

Remember that a SPIRES packed value is just a standard packed value with an extra byte on the end to keep track of the decimal point. The $MAX.LEN proc drops that last byte. The $DECIMAL proc ensures that the decimal point is in the same place for all values (i.e., that the last byte is the same for all values). The $ADJUST proc right-adjusts the value, using hex '00' as the padding character.
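For illustration only (assuming integer input that fits the requested length), a standard packed value without the SPIRES exponent byte could be built like this in Python; the function name is hypothetical.

```python
def to_standard_packed(value, length):
    """Encode an integer as a standard (IBM-style) packed decimal of
    `length` bytes: digit nibbles followed by a sign nibble (C positive,
    D negative), padded on the left with zeros -- no SPIRES exponent byte."""
    sign = 0xD if value < 0 else 0xC
    digits = str(abs(value)).rjust(length * 2 - 1, '0')
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(to_standard_packed(123, 2))   # b'\x12\x3c'
print(to_standard_packed(-45, 2))   # b'\x04\x5d'
```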

Zoned numeric input

An application using other programs (e.g., MARK-IV) may generate "zoned" numeric values for input to SPIRES. The following set of system procs will convert such input to character form, making it acceptable to other SPIRES procs for input (such as $INT, $PACK, $DECIMAL, etc.):

INPROC = SQU/ $INSERT(' ',END)/ $CHANGE.LIST('#C040#,0+, #C140#,1+,
     #C240#,2+, #C340#,3+, #C440#,4+, #C540#,5+, #C640#,6+, #C740#,7+,
     #C840#,8+, #C940#,9+')/ $CHANGE.LIST('#D040#,0-, #D140#,1-,
     #D240#,2-, #D340#,3-, #D440#,4-, #D540#,5-, #D640#,6-, #D740#,7-,
     #D840#,8-, #D940#,9-')/ $SHIFT(1,RIGHT);

The effect is that zoned input is accepted and changed to character form, while values already in character form are left unchanged. This makes it possible for you to use normal character form in WHERE-clauses or ALSO commands. For example, if "123J" is input, the result will be "-1231", since the "J" indicates that the last digit is "1" and the value is negative. If the input were "-1231", the result would be " -1231"; the leading blank is stripped by subsequent numeric conversions.
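A rough Python model of this conversion (not the SPIRES procs themselves), assuming the standard EBCDIC overpunch characters: "{" and A-I for positive digits, "}" and J-R for negative. The right-shift that adds the leading blank is omitted here.

```python
# Standard EBCDIC overpunch: last character encodes final digit and sign.
POSITIVE = dict(zip('{ABCDEFGHI', '0123456789'))
NEGATIVE = dict(zip('}JKLMNOPQR', '0123456789'))

def unzone(value):
    """Convert a zoned value to plain character form; values already in
    character form pass through unchanged."""
    last = value[-1]
    if last in POSITIVE:
        return value[:-1] + POSITIVE[last]
    if last in NEGATIVE:
        return '-' + value[:-1] + NEGATIVE[last]
    return value

print(unzone('123J'))    # -1231
print(unzone('987{'))    # 9870
print(unzone('-1231'))   # -1231 (unchanged)
```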

If you wish to construct your own proc definition, something like the following is equivalent to the above set of procs:

    PROC = ZONED.INPUT;
    RULE = A40/ A36,' ',0/ A48,  #C040#,0+, #C140#,1+, #C240#,2+,
#C340#,3+, #C440#,4+, #C540#,5+, #C640#,6+, #C740#,7+, #C840#,8+,
#C940#,9+, #D040#,0-, #D140#,1-, #D240#,2-, #D340#,3-, #D440#,4-,
#D540#,5-, #D640#,6-, #D740#,7-, #D840#,8-, #D940#,9-/ A55:1,1;

Then, you would refer to the proc by something like:

    INPROC = ZONED.INPUT/ $PACK  ... etc.
or  INPROC = ZONED.INPUT/ $INT  .... etc.

19  Edit Masks

Edit masks provide a facility for formatting numeric data into a fixed-length field. They are sometimes called picture masks because they are specified by creating a picture of the way the output should appear, using symbols or actual characters to represent how the data should be positioned within a field. Value formatting with masks is available in SPIRES through a function ($EDIT -- EXPLAIN $EDIT FUNCTION) and an action (A85 -- EXPLAIN A85 RULE, or system proc $EDIT -- EXPLAIN $EDIT PROC).

SPIRES edit masks are similar to the picture masks found in COBOL and PL/I. They operate on numeric values (integer, packed, or real) and the picture expresses the number as an integer or decimal quantity. They cannot be used to express character strings or exponential values.

The value returned from any EDIT operation is always exactly the size of the mask itself. This may include leading blanks if the mask is longer than is required to display the value. If the value is too long for the mask (that is, if the number of significant digits in the value exceeds the number of digit selectors in the mask), then the left-most, or most significant, places are truncated. If the mask allows fewer decimal places (i.e., places to the right of the decimal point) than there are in the value, then the right-most, or least significant, decimal places are discarded as necessary; note that the value is not rounded. (Use the $DECIMAL function or system proc for rounding. EXPLAIN $DECIMAL FUNCTION or EXPLAIN $DECIMAL PROC.)

Alpha characters used to make up a mask may be in upper or lower case. An improperly constructed mask, or one that contains undefined characters, causes a null value to be returned from the edit operation. A mask may be used to format up to 15 decimal places and up to 31 digits total. The mask itself may be longer if sign or insertion characters are used.

19.1  Numeral Representation: "9", "Z", and "*"

The characters "9", "Z", and "*" are "digit specifier" characters -- they are used for representing the numeric portion of a value. The character "9" is always replaced by a digit from 0 to 9. Using the "9" allows values to be displayed with leading zeros.

The characters "Z" and "*" are "zero suppression" characters. They behave just like the digit specifier "9", except that leading zeros in the value are replaced by spaces (for "Z") or by asterisks (for "*"). The asterisk is also known as a check protection symbol.

A mask may use "Z" or "*" for zero suppression, but not both. One or more may be used to represent the left-most portion of the value, followed by as many "9"s as needed to complete the value. Alternatively, the entire value may be represented by all "Z"s or all "*"s, causing a value of zero to be displayed as all blanks or all asterisks.
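A toy Python model of the three digit selectors (assuming a non-negative integer value and a mask made up only of digit selectors):

```python
def edit(value, mask):
    """Toy edit-mask model for a non-negative integer:
    '9' keeps leading zeros, 'Z' blanks them, '*' replaces them."""
    digits = str(value)[-len(mask):].rjust(len(mask), '0')
    out, significant = [], False
    for m, d in zip(mask, digits):
        if d != '0':
            significant = True
        if m == '9' or significant:
            out.append(d)
        else:
            out.append(' ' if m == 'Z' else '*')
    return ''.join(out)

print(repr(edit(42, '99999')))   # '00042'
print(repr(edit(42, 'ZZZZ9')))   # '   42'
print(repr(edit(42, '***99')))   # '***42'
print(repr(edit(0, 'ZZZ')))      # '   '
```

A real mask also handles signs, decimal points, and insertion characters, as described in the following sections.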

Examples:

19.2  Decimal Point Indication: "."

A period is used in a mask to indicate the position of the decimal point in a value. Unlike PL/I, which has a separate decimal alignment character ("V"), SPIRES uses the implied decimal place from the numeric value and aligns the value in the mask to express a proper magnitude. A third parameter, defined for the $EDIT function and action, allows you to do implied multiplication or division by powers of ten. This has the effect of moving the assumed decimal place either to the right (multiplication) or to the left (division):

When zero suppression characters are used in a mask, they normally operate only up to the decimal point. That is, for a value less than one, the leading decimal point and zeros past the decimal point are retained. An exception is made for the character "Z" when the value is zero. The value returned in this case is all blanks.

19.3  Sign and Currency Symbols: "+", "-", "CR", "DB" and "$"

Without sign information, an edit mask displays the absolute value of a number only. Sign information can be expressed two ways, with the sign characters "+" or "-", or with trailing sign indicators "CR" or "DB". If a value becomes zero after decimal truncation, then it is treated in the mask as if the original value were positive.

A "+" causes either a plus- or minus-sign to be placed at that position in the edited value, depending on whether the value is positive or negative (this is like COBOL. PL/I uses the "S" character to indicate a sign is always to be displayed). A "-" used this way displays a minus-sign for negative values, but leaves the position blank for positive values.

The symbols "CR" and "DB" are used for credit and debit indication. They are placed at the end of the mask. The specified string, CR or DB, appears last in the edited result whenever the value is negative. Otherwise the value ends with two blanks.

The dollar-sign currency symbol ("$") can be placed at the start of the mask, either before or after a leading sign, to cause that character to be placed at that position in the edited value.

19.4  Floating Characters: "+", "-" and "$"

The plus ("+"), minus ("-"), and dollar ("$") symbols, if used singly, occupy the same column in the edited value as they do in the mask. Used this way they are static symbols. However, these symbols may be specified to "float" to the right by specifying a string of one of the symbols repeated for as many places as you would like the symbol to float. Except for insertion characters, the floating field must consist of the float character contiguous in the value. Also, the floating field must begin with at least two contiguous occurrences of the float character. If any but the left most occurrence of these symbols occupy a position that contains a number from the value, then they act like digit specifiers. Otherwise the single character represented in the floating field is placed as far to the right as possible up to a decimal point, according to the length of float in the mask.

Floating characters may not be used in combination with the zero suppression characters "Z" or "*". Remember, however, that in determining the position of a floating symbol, leading zeros are discarded, so zero suppression is taking place within the floating field.

As with the zero suppression characters "Z" and "*", floating characters may be carried beyond the decimal point. The symbol itself only floats up to the decimal point, but carrying the float field to the end causes the mask to return all blanks when the value is zero:

19.5  Insertion Characters: " ", "B", "0" and ","

The following characters can be specified in the mask and are used as literal values to be placed at that position in the edited result:

The character "B" can also be used to represent a blank.

When zero-suppression characters are used ("Z" and "*"), both leading zeros AND leading insertion characters embedded within the numeric portion of the mask are changed to blanks or asterisks (depending on the zero-suppression symbol in use). Remember that zero suppression operates only up to a decimal point.

19.6  Formal Edit Mask Syntax

SPIRES treats the edit mask as an ordered set of fields:

Digit specification takes place in fields 5 and 6 only. Until significant digits are encountered, or until a decimal point, all characters in field 5 are overlaid by a "pad" character, which is * for *'s, 0 for 9's, and blank for the others. 9's begin significance with the first 9's character, so fixed insertion characters are not overlaid following the first 9's position. When field 5 begins with $$, ++, or -- the appropriate sign character replaces the blank preceding the first non-blank character in the edited result of this field.

Signs are replaced as follows: (leading, trailing, or floating)

The input value is adjusted for decimal point alignment to yield the value to be edited. If this value is zero AND there are no 9's in the mask, the entire edit mask is replaced by blanks except when floating * is specified, in which case all characters of the mask are replaced by *'s except the decimal point (if any).

The mask may not begin with a comma. The minimum allowed mask is either field 2 alone, or some combination of fields including field 5 and/or 6. Fields 1 and 3 may not both occur. Fields 2 and 8 may not both occur.

A pictorial view of allowed edit fields looks like this:

The "f" in field 6 means the floating character from field 5. The "B" represents a blank; either a B or an actual space may be used in the mask.

20  Dynamic Elements

This chapter explains "dynamic elements", a special type of element that is created by the user as needed, instead of by the file definer in the file definition. Dynamic elements are created by the DEFINE ELEMENT or DECLARE ELEMENT command, and are eliminated by the CLEAR DYNAMIC ELEMENT(S) command.

The first sections of this chapter explain these commands. [See 20.1, 20.2, 20.3.] The later sections show examples of how dynamic elements might be used. They can be used, for example, with record displays in the standard SPIRES format. [See 20.4.] They may also be used in formats [See 20.5.] and in WHERE clauses in Global FOR mode. [See 20.6.]

20.1  The DEFINE ELEMENT (DEF ELE) Command

When a subfile is selected, you can create record elements "dynamically", that is, elements whose values are created as a record is processed. Dynamic elements are similar in many ways to virtual elements: for example, neither type has values that are stored in the data base, but instead, the values are created as the elements are accessed. However, unlike virtual elements, which must be defined in the file definition by the file owner, dynamic elements may be created by anyone who can select the subfile. (Other similarities and differences are discussed later in this chapter.)

A dynamic element's value may be derived from one or more stored elements (or even virtual elements) or it may be derived from non-data-base values, such as the current date or time. It is usually tied to an element defined in the file definition, however; whenever that element has an occurrence, an occurrence of the dynamic element will also exist. (If it is not tied to a specific element, it is treated as if it were an element at the record level that occurred a single time.)

A dynamic element is usually defined by the DEFINE ELEMENT command:

The various pieces of an expression in a dynamic element definition (the #variable values, the strings, the accessed element values, etc.) are each truncated to 255 characters if necessary before the expression is evaluated. When the expression is evaluated (i.e., when the dynamic element is accessed), the resulting value is truncated to 255 characters if necessary. This is not true for DECLARE ELEMENT. [See 20.3.]

Syntax note: To define a dynamic phantom structure, the syntax of the DEFINE ELEMENT command is somewhat different. [See 23.3.]

You can also create a dynamic element (including dynamic phantom structures) by using the DECLARE ELEMENT command within a protocol. That feature gives you most of the capabilities of FILEDEF's element definition language, including the ability to have Userprocs. [See 20.3.]

Other commands useful with dynamic elements are:

which eliminates all defined dynamic elements (the CLEAR SELECT or SELECT commands will do the same);

which eliminates only the named "elem" as a dynamic element [prior to this command, you cleared a single dynamic element with the command "DEFINE ELEMENT elem AS $ZAP"]; and

which displays the definitions of all dynamic elements currently defined.

Both the SHOW and CLEAR commands may use the word DEFINED or DECLARED in place of DYNAMIC, but these are just aliases that affect both DECLARE and DEFINE elements.

Up to 56 dynamic elements may be defined for a selected subfile. Examples of and information about their use appear later in this chapter. [See 20.4.]

20.2  Secondary Elements as Multiple Occurrences of a Dynamic Element

A dynamic element may combine the occurrences of several different elements so that they are all treated as occurrences of that dynamic element. To request this feature, you must use the "FOR primary" option on the DEFINE ELEMENT command as follows:

where "primary" is the name or alias of the primary element, and "elem2", etc., are the names of secondary elements. The elements must be separated by commas. In processing commands that "retrieve" the dynamic element (e.g., DISPLAY), SPIRES will retrieve all the values of the primary element and process the "expression" for each of them, and will then retrieve the occurrences of "elem2" and process the expression for each of them, and so forth. Up to four secondary elements may be requested. The other options and parameters were discussed earlier. [See 20.1.]

Consider the following example:
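For instance, given the elements WORKER, SPOUSE, and CHILD discussed below, a combined PERSON element might be defined as follows (a sketch; the exact command form is an assumption):

```
DEFINE ELEMENT PERSON FOR WORKER, SPOUSE, CHILD AS @@WORKER
```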

A dynamic element such as the one shown above could be useful in several situations. For example, you could use it in a WHERE clause in Global FOR: FOR SUBFILE WHERE PERSON OCCURS > 4 could be used to find families with more than four members. Or, in the $REPORT format, the PERSON element could be used to place all the family members in a single column of the report.

The secondary elements must be in the same structure as the primary (or else all the elements must be record-level ones), and they should be of the same element type as the primary. Because SPIRES treats the secondary elements as extra occurrences of the primary one, the expression should be constructed for the primary, and does not need to include the secondaries. For instance, the DEFINE ELEMENT command in the example above does not need to mention the secondary elements in the "AS @@worker" part of the command. (If the given expression were "AS @@worker @@spouse @@child", the result would be the same as above except that all occurrences of PERSON would have "Nelson, HarrietNelson, David" appended to them, since SPIRES would retrieve the first occurrence of the SPOUSE and CHILD elements for the evaluation of the expression each time.)

Filters may be applied to the primary, secondary or dynamic elements as desired. That means that filters applied to the primary and/or to secondary elements will affect the occurrences of the dynamic element. For example, if a filter on a secondary element limits the element to only one occurrence, only one occurrence of it will be processed as an occurrence of the dynamic element.

20.3  Declared Elements: Dynamic Elements With Element Definitions

The DECLARE ELEMENT command, which can be issued only from within a protocol, gives you most of the capabilities of the element definition language to create a dynamic element. Unlike dynamic elements created with the DEFINE ELEMENT command, "declared" dynamic elements can have their own Inproc and Outproc rule strings, element information packets, and Userprocs.

The DECLARE ELEMENT command is followed on the next line of the protocol by one or more lines of the element definition, which is followed on a final line by the statement ENDDECLARE:
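In outline (a sketch assembled from the option descriptions that follow; bracketed parts are optional, and their exact placement is an assumption):

```
DECLARE ELEMENT elem [FOR primary [(OCC n)] [, secondary ...]] [IN structure]
  element definition statements
ENDDECLARE
```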

The "FOR primary" option can be specified if you want to tie the occurrence of the declared element to another element in the record-type. It works almost the same way as it does with defined dynamic elements except that here it actually provides the primary element's values as the internal values for the declared element. [See 20.1.] The REDEFINES statement, allowed in the element definition statements, will have the same effect. REDEFINES and the "FOR primary" option cannot both appear in the element declaration.

With the OCC option, you can request that a specific occurrence of the source element be used in generating the dynamic element. For the first occurrence, specify "n" as 1; for the second, "n" as 2; and so on.

Alternatively, with the "secondary" option added to the FOR option, you can request that additional elements in the goal record also be treated as occurrences of the primary to feed to the declared element. [See 20.2.]

You can specify "IN structure" if you want to tie the occurrence of the element to occurrences of the structure. The element will occur once for each occurrence of the structure specified. Again, REDEFINES and the "IN structure" option cannot both appear in the element declaration.

Element Definition Statements Allowed in Element Declarations

Below are the element definition statements allowed in element declarations; they are explained fully in the manual "SPIRES File Definition" (or in "SPIRES Technical Notes" in the case of phantom structures). They are all optional (though some may be required if others are present). The asterisks indicate those for which additional information appears following the list.

    * OCC = VARIABLE;
    * TYPE = elem-type;
      INPROC = processing-rule string;
      OUTPROC = processing-rule string;
    * REDEFINES = element-name;
      COMMENTS = comments;
      ELEMINFO-DEF;
        any element-information statements desired
    * PHANTOM;
        SUBGOAL = subgoal-record-name;
        SUBFILE = subfile-name;
        VIA = {LCTR|INTKEY|EXTKEY|TRANS};
      USERDEFS;
        VARIABLES;
          any variable definitions for Userprocs
        USERPROCS;
          any Userprocs desired
      EXTDEF-ID = gg.uuu.extdef.rec; - for externally defined procs

If you are defining a phantom structure, you must include a "TYPE=STR;" statement; a phantom structure is the only kind of structure you can define with DECLARE ELEMENT. You might also want to declare the type if no Inproc rule string (which could implicitly set the type) is included. For instance, if you redefine an integer element, you might want to include either a "TYPE = INT;" or an "INPROC = $INT;" statement; otherwise, SPIRES will convert the value to string (the default element type) before processing it through any Outprocs, which might not be what you or the Outproc rules were expecting. The basic types are STRING, PACKED, REAL, INTEGER, and HEX.

You may code an OCC statement, but only with the value VARIABLE, as shown above. This means the element has either one occurrence or none. The element occurs only if a SET VALUE Uproc is issued in a Userproc during the generation of the element. Without this statement, the element always occurs once. Note these two details: First, the OCC = VARIABLE statement cannot be used when the REDEFINES statement is also coded for the element. Second, the OCC = VARIABLE statement here for a declared element is not as versatile as it is for a virtual element; a virtual element may occur more than once, whereas a declared element is limited to at most one occurrence.

Note some of the statements that are NOT allowed: ALIAS and LENGTH. In an element definition, LENGTH has meaning only for elements that are physically stored in a file.

Examples of Declared Dynamic Elements

Here is a very simple example of a declared dynamic element, but one which takes advantage of benefits that declared dynamic elements have that are not shared with defined ones. This element concatenates an author and title element, and capitalizes the entire value:

If the AUTHOR element's value were "Joyce, James", and the TITLE was "Ulysses", the value of the dynamic element would be:

The element information is one added benefit of declared dynamic elements. This declared dynamic element may benefit from the absence of length restrictions that can cause problems for defined dynamic elements that have long text values. (See below.)

Differences Between Defined and Declared Dynamic Elements

In most cases, you can do the same thing with either a dynamic element created by DEFINE ELEMENT or one created with DECLARE ELEMENT. Some of the distinctions between the two have already been pointed out:

Other important distinctions:

However, because declared dynamic elements are a type of dynamic element, they count toward the total limit of 56 dynamic elements as well. You can see how many declared dynamic elements are set, as well as their names, with the SHOW DYNAMIC ELEMENTS command, which also shows the definitions of defined dynamic elements. [See 20.1.]

Lastly, Inproc rules are basically used to declare element type. They are executed when $GETIVAL accesses these elements, and when the "FOR element" is retrieved from a record during Where-clause processing, similar to Virtual Element processing. But they are NOT executed when the relational expressions of a Where-clause are initially processed. Instead, the associated values in the Where-clause are converted by simple TYPE conversion rules depending upon the TYPE of the declared dynamic element.

20.4  Using Dynamic Elements

Dynamic elements may be used in most any place other elements can be; in particular, they are available in the following situations:

They may not be used:

The various pieces of an expression in a dynamic element definition (the #variable values, the strings, the accessed element values, etc.) are each truncated to 255 characters if necessary before the expression is evaluated. When the expression is evaluated (i.e., when the dynamic element is accessed), the resulting value is truncated to 255 characters if necessary.

Note that all secondary @element and @@element values are retrieved along with system variables ($varname) only at the start of the structure (or of the record, if the primary is not in a structure) that contains the primary. The expression is then evaluated for each occurrence of the primary until the structure is exhausted. Thus, though secondary elements may occur multiple times within the structure, only their first occurrences are retrieved.

Here are some examples of dynamic elements, created for the RESTAURANT subfile:
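For instance, FOOD.INTERNAL might be defined as follows (a sketch; recall that "@elem" yields an element's internal value, while FOOD.HEX would additionally pass that value through a hexadecimal-conversion function not shown here):

```
DEFINE ELEMENT FOOD.INTERNAL AS @FOOD-QUALITY
```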

The above example shows a common usage of dynamic elements: to examine the internal, or stored, form of an element. The FOOD-QUALITY element has an A46 ($ENCODE) processing rule that converts allowed values, such as "Excellent", to an integer. Here the two dynamic elements show the stored value for FOOD-QUALITY in two forms: FOOD.INTERNAL shows the stored value converted from integer to string, and FOOD.HEX shows the hexadecimal representation of the stored value. Hence, dynamic elements make it very simple to see the unconverted (i.e., not processed through OUTPROCs) values of an element.

Another common use of dynamic elements is to provide a means of converting the element value into a different form by processing it through SPIRES system functions. Here is another example, continuing our session above with the RESTAURANT subfile:

The FOODSTARS element provides a graphic display of the FOOD-QUALITY (alias FOOD) element, which internally is stored as a value from 1 to 4. Specifically, the definition of FOODSTARS requests the first "n" characters of the character string "****", where "n" is the number stored for the element FOOD. (The external value of FOOD, represented by @@FOOD in a dynamic element definition, would not work, since the external form is a character string, such as "Excellent". It is important to know which form of an element to use in a dynamic element definition.)

You can create a dynamic element that represents or processes the value in a system variable. For example, you could use the $DATE and $UDATE variables (the converted and unconverted forms of the current date respectively) in conjunction with the DATE-UPDATED element (alias DATEUP) to find out how many days it has been since a record was updated:
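Sketches of the two definitions (the exact expressions are assumptions built from the description that follows):

```
DEFINE ELEMENT TODAY AS $DATE
DEFINE ELEMENT DAYS AS $DAYS($UDATE) - $DAYS(@DATEUP)
```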

The dynamic element TODAY represents the current value of the system variable $DATE. The element DAYS represents the number of days since the record was last updated, using the $DAYS function, which returns an integer representing the number of days the given date is after the arbitrary date of 1/1/0000. (Note that arithmetic is allowed in the "expression" of the dynamic element.)

Another practical use of dynamic elements is to concatenate element values together. For example, the CITY and STATE elements are joined below, separated by a comma and a blank:
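A sketch of such a definition (the quoting of the literal separator is an assumption):

```
DEFINE ELEMENT CITY.STATE AS @@CITY ', ' @@STATE
```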

Note that the STATE and CITY.STATE elements are indented because they are in the same structure as CITY.

The functions $ELEMTEST and $ELNOTEST may be used with dynamic elements; note that if the TYPE parameter is used, as in $ELEMTEST(DYNELEM,TYPE), the result will always be DYN, regardless of the type specified in the TYPE option of the dynamic element definition.

20.5  Dynamic Elements with Formats

Dynamic elements may be used within formats only by means of a statement such as:

where the #variable contains the name of the dynamic element; the variable is examined during format execution to determine which dynamic element to access.

The system variables $CVAL and $UVAL will always be the string form of the evaluated expression from the dynamic element definition.

20.6  Dynamic Elements and WHERE Clauses

Dynamic elements can be very useful in WHERE clauses within Global FOR mode.

Several forms of WHERE clauses with dynamic elements are supported. In all of those shown below, "dynelem" refers to the name of the previously defined dynamic element:
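The basic form (a sketch, shown here with "=", though any relational operator may appear):

```
FOR SUBFILE WHERE dynelem = value
```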

That is the basic form of a WHERE clause. The value will be converted to the type of the dynamic element, which was specified in the DEFINE ELEMENT command. [See 20.1.] Also, each value of the dynamic element resulting from the evaluation of the dynamic element's expression will be converted to the specified type for comparison to the "value".

The OCCURS or LENGTH operator is also allowed:
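For example (sketches; compare "FOR SUBFILE WHERE PERSON OCCURS > 4" earlier in this chapter):

```
FOR SUBFILE WHERE dynelem OCCURS relation n
FOR SUBFILE WHERE dynelem LENGTH relation n
```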

Remember that for OCCURS, the number of occurrences of the primary element in the dynamic element definition controls the number of occurrences of the dynamic element.

Inter-element relationships may also be expressed, as long as the dynamic element precedes the relational operator:
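In sketch form:

```
FOR SUBFILE WHERE dynelem relation elem
```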

where "elem" is the name of the element to which the dynamic element is to be compared. The type of "elem" must be the same as the type of the dynamic element.

In any of the above forms, "same structure" processing of the dynamic element is not allowed. That is, an at-sign (@) preceding "dynelem" is ignored.

Here is a simple example of WHERE clause processing using one of the dynamic elements defined in an earlier section:

Since no type was specified in the dynamic element definition, the dynamic element is considered a string element.

Here is an example where the TYPE option on the DEFINE ELEMENT command is necessary if proper WHERE-clause handling is to occur:

The first definition for DAYS did not declare the element as an integer, meaning it was considered a string. The CHEZ PORKY record fit the criteria because the string "23" sorts after the string "100". When the element definition was corrected to specify that the element should be considered an integer value, the integer 23 did not sort after the integer 100, so CHEZ PORKY was skipped.

As the second example shows, the TYPE option on the DEFINE ELEMENT command may not be necessary when you are displaying element values only. However, if you want to use a dynamic element in a WHERE clause, be sure the proper type is specified for comparison.

20.7  Errors in Using Dynamic Elements

Several error messages may result during the definition of a dynamic element. They include:

Other errors may occur during evaluation of a dynamic element, usually when an attempt is made to use a dynamic element in a forbidden situation, as in a $PROMPT format record display. An error message will result.

If an element accessed in the expression does not occur, the value returned for it will be null. In other words, if the element X is defined as @Y @Z and a record has no occurrence of Z, the value of X will consist solely of the internal value of element Y. (Remember that if Y did not occur, then the dynamic element X would not occur, because element Y is the first element in the expression, and is thus the primary.)

You may change the definition of an already existing dynamic element, but it is not advisable to change the primary element, especially if the new primary element is in a different structure than the old one. If you do this, errors may occur. It is better to clear that dynamic element before redefining it.

21  Element Filters

Introduction to Element Filters

When you retrieve data in SPIRES, you nearly always begin by "filtering" the data, generally by suppressing the display of data that you do not want to see. For example, on the subfile level, by using a search command such as FIND, you are in effect asking SPIRES to filter out any records in the data base that fail to match your search criteria.

In much the same way that search commands filter data on a subfile level, the element filters described in this chapter filter data on the record level, letting you display (or scan or, in an input format, retrieve for updating) only element occurrences that fit the criteria you name.

For example, suppose you have a subfile called SUPPLIERS, containing records of companies from which you order equipment. Each record in SUPPLIERS contains the supplier's name, as well as a multiply-occurring structure, where each occurrence describes a particular order. If you retrieve all records with a given order date and display them without filters, you'll see all occurrences of the order structure for each record, including occurrences that you don't care about:

However, by setting a display filter with the SET FILTER command, you can narrow the display to occurrences with a particular date: e.g., you can display orders placed in 1985, filtering out orders placed in other years:

In addition to limiting the display of element occurrences after a DISPLAY or TYPE request, filters can also be applied to the merging of data in a merge input format, and to the scanning performed by a WHERE clause in Global FOR.

The rest of this chapter discusses the SET FILTER command in more detail:

21.1  The SET FILTER Command

The SET FILTER command has the following syntax:
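In outline, it looks something like this (a sketch; bracketed options may be omitted, and the FOR keyword and option order are assumptions):

```
SET FILTER (type-list) FOR elem.name [(occ)] [IN limit] [SEQUENCE elem-list] [WHERE clause]
```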

Note: Because the OVERLAY option has such significant impact on the remainder of the syntax, it is treated as if it were part of a separate command, SET FILTER OVERLAY, discussed in the next section. [See 21.1.1.]

The "type" option names the activity in which filtering should take place. Currently there are three possible values for "type-list": DISPLAY, MERGE and SCAN. You can name more than one of them (or all three) as part of a single SET FILTER command.

You should explicitly name multiple "types" -- e.g., SET FILTER (SCAN, DISPLAY)... -- when you want more than one type to be in effect.

Filters of type MERGE filter the merging of element values into a record, during input through a merge format. [See the manual "SPIRES Formats" for details on this option.] SCAN-type filters ensure that element occurrences retrieved by a Global FOR command are bound together on the same structural path. For more information on this option, see Section 4.4 of the SPIRES manual on Global FOR, or online, [EXPLAIN FILTERS, ON WHERE CLAUSES IN GLOBAL FOR.]

"Elem.name" is the name of the element being filtered; any element can be filtered, including a virtual or dynamic element. [See 20.] The element to be filtered can also be a structure, as in the example in the previous section.

The "(occ)" option specifies which occurrences of the element named should be processed by subsequent commands, after the WHERE clause on the SET FILTER command has finished filtering the record. The "IN limit" option specifies which occurrences should be examined by the SET FILTER command's WHERE clause in the first place. Thus one of these options limits occurrences before the WHERE filtering occurs, while the other option limits occurrences after WHERE filtering has occurred. These two options are discussed in detail in a section of their own. [See 21.4.]

The SEQUENCE option lets you specify how (or whether) the element values that pass filter constraints should be sequenced:

If the filtered element is a structure, the SEQUENCE option may name one or more elements within the structure by which the structure should be sequenced. If the filtered element is not a structure, the SEQUENCE clause may name only the element itself. The (D) option causes filtered values to be sequenced in descending order, while the (X) option causes filtered values to be sequenced according to their external form, in those cases where the internal and external forms differ. [Compare the syntax of the SEQUENCE command, described in the "SPIRES Searching and Updating" manual.]

"WHERE clause" is a clause following the same rules as a WHERE clause in Global FOR, such as the clause "where date.ordered = 1985" in the example in the previous section. [See 21.] [See the SPIRES Global FOR manual for more information on WHERE clauses.] Among other uses, the "WHERE clause" option on the SET FILTER command lets you filter an element's occurrences according to each occurrence's value. Again, if the filtered element is a structure, the WHERE clause may name elements (one or more) within the structure that should be examined to determine the appropriate filtering. If the filtered element is not a structure, only the element itself may be named as the element to be examined in the WHERE clause (although other elements may be named for comparison using inter-element relations; see the Global FOR manual).

Examples of Simple Filters for Display

Consider again the SUPPLIERS subfile where each goal record contains the name and address of a supplier, as well as information about each ORDER (a structure) placed with it. Unfiltered records in the $REPORT format might look as follows:

But if you set a filter, you can get rid of those extraneous pre-1984 orders:
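For example (a sketch; the FOR keyword and option order are assumptions):

```
SET FILTER (DISPLAY) FOR ORDER WHERE DATE.ORDERED = 1984
```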

Furthermore, with the SEQUENCE option, you can ask SPIRES to sequence occurrences that have passed the filter:
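For example (a sketch; the "(D)" requests descending order, and the command form is an assumption):

```
SET FILTER (DISPLAY) FOR ORDER SEQUENCE DATE.ORDERED (D) WHERE DATE.ORDERED = 1984
```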

21.1.1  Setting Additional Filters: The SET FILTER OVERLAY command

Each time the SET FILTER command is issued, it replaces all existing filters for the named element. In some situations, you need to add more filters to those that already are in effect for an element. For example, if a structure already has a display filter in effect, it is possible that you need to set a merge or scan filter that's different from the display filter; yet you don't want to replace the display filter either.

The OVERLAY option on the SET FILTER command gives you that flexibility. If you use the OVERLAY option, several other options on the SET FILTER command cannot be used; for that reason, you may wish to think of SET FILTER OVERLAY as a command separate from SET FILTER. [On the other hand, in most other ways the commands are the same; general references to SET FILTER in this SPIRES manual and others also apply to SET FILTER OVERLAY, even where it is not mentioned explicitly -- a good reason to think of OVERLAY as simply an option on SET FILTER.]

The SET FILTER command with the OVERLAY option has this syntax:
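In outline (a sketch; note that the "(occ)", IN and SEQUENCE options are not available here):

```
SET FILTER OVERLAY (type-list) FOR elem.name WHERE clause
```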

The "type" option names the activity in which additional filtering should take place. Currently there are three possible values for "type-list": DISPLAY, MERGE and SCAN. You can name more than one of them (or all three) as part of a single SET FILTER command.

You should explicitly name multiple "types" -- e.g., SET FILTER OVERLAY (SCAN, DISPLAY)... -- when you want more than one type to be in effect.

Filters of type MERGE filter the merging of element values into a record, during input through a merge format. [See the manual "SPIRES Formats" for details on this option.] SCAN-type filters ensure that element occurrences retrieved by a Global FOR command are bound together on the same structural path. For more information on this option, see Section 4.4 of the SPIRES manual on Global FOR, or online, [EXPLAIN FILTERS, ON WHERE CLAUSES.]

"Elem.name" is the name of the element being filtered; any element can be filtered, including a virtual or dynamic element. [See 20.] The element to be filtered can also be a structure, which is quite common.

"WHERE clause" is a clause following the same rules as a WHERE clause in Global FOR, such as the clause "where date.ordered = 1985" in the example in a previous section [See 21.] and in the examples that follow. [See the SPIRES Global FOR manual for more information on WHERE clauses.] Among other uses, the "WHERE clause" option on the SET FILTER command lets you filter an element's occurrences according to each occurrence's value.

Note that the "(occ)", SEQUENCE and "IN limit" options of the SET FILTER command are not available when the OVERLAY option is in effect. If you need to use them, they must appear on the first SET FILTER command for the element, not on subsequent SET FILTER OVERLAY commands.

Consider again the SUPPLIERS subfile used in the previous sections. Each record contains the name and address of a supplier, as well as information about each ORDER (a structure) placed with it. Let's use an example at the end of the last section as the start of our filter-overlay work:

Next, set a filter to get rid of those extraneous pre-1984 orders:

Next, set an overlay filter to filter out any orders where the item name contains the letter "a":

In the example, SPIRES first processes the ORDER structure through the first filter; any occurrence that passes that one then gets processed through the second.

An important aspect of filters that the example above shows is that they will not remove records from the display; they will only remove occurrences of the element or structure being filtered. Thus, although Good Man Hardware had no occurrences of the ORDER structure that fit the two filter criteria in effect, it still appears in the display, albeit with no occurrences of ORDER shown.

Two other considerations regarding overlay filters: Overlay filters are particularly useful in situations where you are not sure whether the user already has other filters in effect; you can add and later remove the overlay filters while you control the session without interfering with any already set. [See 21.3.]

Also, remember that overlay filters may be of a different type from other filters already set for the element (scan instead of display, for example).

21.2  Filter Capabilities and Limits

All element filters in SPIRES have the following capabilities and limits:

In addition, filters for display affect the following commands: [Warning: in Partial FOR, when the element named in the "FOR element" command is the same as the element named in the SET FILTER command, filtering may fail to take place.]

In general, filters do not affect the following commands:

21.3  Showing and Clearing Filters

To see what filters are currently in effect for the goal records, you can issue the SHOW FILTERS command.

To clear one or more filters, use the CLEAR FILTER command:
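In outline (a sketch; every bracketed option is optional, and each is described below):

```
CLEAR FILTER[S] [OVERLAY] [LAST] [(type-list)] [FOR elem-name]
```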

To clear all filters currently in effect, issue the CLEAR FILTERS command.

The LAST option indicates that the last filter set that matches any other criteria in the CLEAR FILTER command should be cleared. For instance, you can clear the last overlay filter set for a particular element, or the last filter set.

To clear all overlay filters, issue the command CLEAR FILTERS OVERLAY; any filters set without the OVERLAY option will remain in effect. [See 21.1.1.]

The "type" option, with the values in parentheses, lets you limit the filter clearing to particular types of filters: display, scan or merge. To specify more than one, separate them with commas, as in "(display, scan)". [See 21.1.]

The "FOR elem-name" option lets you specify a particular element with a filter or filters you want to clear.

Any of the options may be used in combination with the others. Note that the "S" option, which changes FILTER to FILTERS, really has no significant effect; CLEAR FILTER is the same as CLEAR FILTERS.

The following segment of a terminal session shows how these three commands could be used:

21.4  Filtering Elements by Occurrence Number

Two options already mentioned for the SET FILTER command, "IN limit" and "(occ)", cause specific occurrences of an element -- for instance, the fourth and fifth occurrence of an element that occurs five times -- to be filtered. These two options never filter an element occurrence based on the occurrence's value (as a WHERE clause sometimes does, e.g., "where date.ordered = 1985"), but only filter according to the sequential order of the occurrence in the record.

The "IN limit" option names which occurrences of the element should be examined for WHERE clause compatibility. Only those occurrences of the element that are named by the IN clause will be processed any further; in effect, those that are not named are eliminated by this option before the WHERE clause even takes effect.

"Limit" of "IN limit" may specify any one of the following values:

"Occ" of "(Occ)" can have any one of the values shown on the chart above. So, for example, in the command below, the WHERE clause would not even examine the third and following occurrences of the order structure, because "IN 1/2" limits the scan to the first two occurrences only:
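A sketch of such a command (the FOR keyword and option order are assumptions):

```
SET FILTER (DISPLAY) FOR ORDER IN 1/2 WHERE DATE.ORDERED = 1984
```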

The option "(occ)" specifies which of the element occurrences that passed the "IN limit" and WHERE filters should be processed. Note that, in contrast to "IN limit", the "(occ)" option takes effect only after the WHERE clause filtering. (Without a WHERE clause, the "IN limit" and "(occ)" options would have the same meaning.)

So, for example, after the WHERE clause below has filtered occurrences of the order structure to find keypad orders, the option "(1/2)" would cause subsequent display requests to show only the first two keypad orders, suppressing any orders beyond these first two:
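A sketch of such a command (the element name ITEM.NAME and the value are assumptions for illustration):

```
SET FILTER (DISPLAY) FOR ORDER (1/2) WHERE ITEM.NAME = KEYPAD
```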

One small warning: the order of the syntax on the SET FILTER command does not really reflect the conceptual order in which these options would take place. The "(occ)" option precedes the "IN limit" option and the "WHERE clause" option in the command syntax, even though conceptually it takes place later.

Examples of Filters by Occurrence Count

Here are some examples of the use of the "IN lim" and "(occ)" options, using the SUPPLIERS example again. First, we will add the "IN lim" option to the filter set before:

The example demonstrates that only the first two occurrences of the ORDER structure were examined for DATE.ORDERED values of 1984. The other occurrences were filtered out. Only the record for the 3-D House of Beauty contained a 1984 DATE.ORDERED in the first two occurrences of the ORDER structure. (Note that the other two records are still shown, because it was the ORDER structure that was filtered, not the entire record.)

Compare the above example to the next one, where the "(occ)" option replaces the "IN lim" option:

The SET FILTER command above specifies that the first two occurrences of the ORDER structure that have DATE.ORDERED values for 1984 should be processed for each record.

Next, here is an example showing the "(occ)" option used by itself:

Only the first two occurrences (1/2) of the ORDER structure in each record were processed. They were arranged in descending order by the DATE.ORDERED element.

21.5  Filters and Dynamic Elements

The SET FILTER command may be applied to dynamic elements. [See 20.] You can, for example, treat multiple occurrences of an element as if they were separate elements, which can be useful in formats such as $REPORT. This section will demonstrate some of the diverse uses of this combination of SPIRES features. These examples should be helpful when working out similar types of display problems yourself. (Warning: Not all uses of dynamic elements and filters need to be this complicated.)

The example below, using the SUPPLIERS subfile one last time, shows how you could get a report of orders arranged sequentially by date. Extraneous information (i.e., other orders from the company that are not relevant at that point in the report) is filtered out. One of the effects of this is to make it appear that each DATE.ORDERED element is the "key" to a separate record rather than part of a multiply-occurring structure; that is, it creates the appearance of a flat file.

As the example above shows, it can sometimes be useful to treat separate occurrences of an element as if they were separate elements. Suppose you are creating a report using the BLOOD DONORS subfile:

In the first display, records having two occurrences of the PHONE.NUMBER element needed two lines of the display. In the second, with the ALIGN value for the ADJUST parameter in $REPORT, the occurrences were joined together with a blank, but now they appear to be a single occurrence of the PHONE.NUMBER element.

Here is another way, which uses dynamic elements and filters to seemingly create two elements from one:
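In sketch form (the occurrence filters use the "(occ)" option; the exact command forms are assumptions):

```
DEFINE ELEMENT PHONE1 AS @@PHONE.NUMBER
DEFINE ELEMENT PHONE2 AS @@PHONE.NUMBER
SET FILTER (DISPLAY) FOR PHONE1 (1)
SET FILTER (DISPLAY) FOR PHONE2 (2)
```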

Both dynamic elements have the same definition. However, the two filter commands effectively "redefine" them, telling PHONE1 to filter out all occurrences except the first one, and PHONE2 all but the second.

Handling separate occurrences of an element as separate elements thus allows you to position them independently in a $REPORT format. Combined with the capability of multi-row positioning, you can place separate occurrences of an element practically wherever you want.

In another situation, you might want to graphically display ranges for a given element. Suppose, for example, you want to place restaurants in categories by their average prices, using a low-medium-high division:
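
The commands might be sketched as follows (the price thresholds, in cents, and the exact DEFINE ELEMENT and SET FILTER syntax are illustrative only):

     DEFINE ELEMENT LOW TYPE INTEGER AS PRICE
     DEFINE ELEMENT MEDIUM TYPE INTEGER AS PRICE
     DEFINE ELEMENT HIGH TYPE INTEGER AS PRICE
     SET FILTER LOW WHERE PRICE < 800
     SET FILTER MEDIUM WHERE PRICE >= 800 AND PRICE < 2000
     SET FILTER HIGH WHERE PRICE >= 2000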

The element PRICE is stored as an integer (representing the value in cents). The dynamic elements LOW, MEDIUM and HIGH are defined equivalently to the element PRICE in its internal form -- they are declared to be integers too, for proper handling in the WHERE clause of the SET FILTER commands. The three SET FILTER commands define the range allowed for each dynamic element; if the value of PRICE does not fall into a given range, that dynamic element is filtered out for the given record. (The $REPORT format applies edit masks to the values to display them in the proper dollars-cents format.)

In that example, the element being redefined dynamically was singly occurring: a restaurant has only one average price. However, suppose the element were multiply occurring and you wanted a count of the number of values in each range for each record. Similarly, suppose you had restaurants from several cities, not just San Francisco, and you wanted to know the number of restaurants in each that fell into each of the price categories shown above:

The same dynamic elements and filters are established as before. Three summary elements, which count the number of occurrences of each of the dynamic elements, are established in some of the SET FORMAT commands (e.g., DEFINE VALUE NO.LOW AS COUNT LOW). The search result is sequenced by city and then displayed in summary mode, which eliminates the detail lines and displays only summary information by city. Hence, Menlo Park has one high-priced, one mid-priced, and fourteen low-priced restaurants represented in the RESTAURANT subfile. (The bottom line, displaying column totals, shows that five of the 93 restaurants did not have a value for PRICE.)

22  IF-Testing in Record Input

Note: This facility is primarily for use by SPIRES system programmers, though it may be useful to SPIRES users in certain applications.

This facility allows you to control whether parts of element values, entire element values, or entire records are added to a subfile, using IF-testing of statements within the input.

$IF-testing (a la PL360) is now allowed in FASTBILD, SPIBILD and SPIRES for record input (either formatted or unformatted). This mode of input is enabled by a new command: SET IFTest, and disabled by SET NOIFTest. Flags may be initialized with a command of the form:

This facility recognizes a set of "directives" in the input record stream (e.g., the ACTIVE file). These directives must be in uppercase and start in column 1 of a separate input line. The directives recognized are $SET, $RESET, $IFT, $IFF and $END.

Ignored columns may be used for comments. An example follows:
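
(The fragment below is purely illustrative; the flag name and the directive operands are assumptions, not taken from an actual job.)

     $SET TEST
     $IFT TEST
     NOTE = This line is input only when flag TEST is on.;
     $END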

This facility can also be used to advantage in SPIBILD and FASTBILD to skip or include input lines (even entire records) under certain conditions expressed by the setting of the flags. However, care should be taken if the data is likely to contain the strings "$SET", "$RESET", "$IFT", "$IFF" and "$END" at the start of a line (column 1).

23  Phantom Structures

(Note: Most of the material in this chapter will eventually be incorporated into the manual "SPIRES File Definition".)

"Phantom structures" are a major extension of SPIRES' subgoal capabilities. A record-type (call it REC02) containing keys or locators to another record-type (say, REC01) may contain a phantom structure, each occurrence of which is actually a record in REC01 that is pointed to, not just its pointer. The two record types may be in the same or different files.

Phantom structures provide a way of creating subgoal connections in the file definition itself. The goal records in one record-type appear as structures in another. They are tied together as "virtual hierarchies", "virtual" in that the records of one record-type, though accessible from the second, are not redundantly stored there. End users may have the choice of using the record-types together for searching and retrieving data or using them separately to make updating convenient.

The next section will explain the file definition statements necessary for creating phantom structures, including several examples of typical uses. [See 23.1.] The following section will enumerate the capabilities and limitations of phantom structures. [See 23.2.]

Although phantom structures are most commonly and most efficiently created in the file definition itself, they may also be defined dynamically (lasting as long as the current subfile is selected) through a special form of the DEFINE ELEMENT command. [See 23.3.]

23.1  Coding Phantom Structures

This section will explain the statements necessary in a file definition to create phantom structures. (Keep in mind that you can also create "dynamic" phantom structures that do not require file definition changes. [See 23.3.]) There are two types of phantom structures: "subgoal" phantom structures, where the connected record-types are both in the same file; and "subfile" phantom structures, where the record-type whose records become the occurrences of the phantom structure is the goal record-type for a subfile, which may or may not be part of the same file.

Subgoal Phantom Structures

A subgoal phantom structure is coded as a virtual element that redefines an element whose value points to the subgoal record. It is often coded as simply as this, where the locator element in REC02 is redefined by the virtual element SUBLCTR, the phantom structure:

    (1)    RECORD-NAME = REC02;
    (2)      FIXED;
    (3)        KEY = DATE; INPROC = $DATE; OUTPROC = $DATE.OUT;
    (4)      OPTIONAL;
    (5)        ELEM = POINTER;
    (6)          TYPE = LCTR;
    (7)      VIRTUAL;
    (8)        ELEM = SUBLCTR;
    (9)          TYPE = STR;
   (10)          REDEFINE = POINTER;
   (11)      STRUCTURE = SUBLCTR;
   (12)        PHANTOM;
   (13)          SUBGOAL = REC01;
   (14)          VIA = LCTR;

The first six lines are a regular index record definition; the last eight are the lines added to create the phantom structure SUBLCTR. The virtual element SUBLCTR redefines the POINTER element, which contains the pointer to REC01. SUBLCTR is defined as a structure (line 9), whose definition is given in lines 11 to 14: each occurrence of POINTER means one occurrence of SUBLCTR, which consists of one record from REC01.

The VIA clause tells SPIRES how to interpret the redefined element value (POINTER, in this case) in order to retrieve the phantom record. Possible values are:

How can this facility be used? The simplest way would be to select a subfile for which REC02 is the goal record (or attach it with the ATTACH command), set the virtual element and then display some records:
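
A hypothetical session (the subfile name and the search value are invented for illustration):

     SELECT ORDER.INDEX
     SET ELEMENTS ALL + SUBLCTR
     FIND DATE 5-28-92
     TYPE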

Two records of REC01 become occurrences of the phantom structure SUBLCTR for a particular REC02 record, even though they are not stored in REC02. The redefined POINTER element tells SPIRES which records in REC01 to retrieve for the phantom structure.

Note that the structure SUBLCTR had to be set because it is a virtual element -- virtual elements are not displayed by the standard SPIRES format unless they are explicitly set (or named in a TYPE command). Note too that when phantom structures are to be defined, it is a good idea to choose different element names (or add different aliases if two elements must have the same name) to avoid confusion. For instance, it might be preferable to rename the DATE element in REC02 as INDEXED.DATE, so that it will not be confused with the DATE element in REC01.

Subfile Phantom Structures

Subfile phantom structures let you retrieve the goal records from another subfile through phantom structures if you have the keys to those records in the current goal record-type.

Suppose you have two subfiles, one with records about automobile drivers and the other with records about their cars. The two subfiles have the following elements and indexes:
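
For illustration, suppose the subfiles look roughly like this (all names other than DRIVERS.LICENSE are invented):

     DRIVERS subfile:  key DRIVERS.LICENSE;  elements NAME, EYE.COLOR, ...
     CARS subfile:     key LICENSE.PLATE;    elements COLOR, DRIVERS.LICENSE, ...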

The goal record definition for the CARS subfile could include a phantom structure using the common element DRIVERS.LICENSE:
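
The added lines might look something like this (modeled on the subgoal example above; the structure name DRIVER.INFO and the VIA value are assumptions):

     VIRTUAL;
       ELEM = DRIVER.INFO;
         TYPE = STR;
         REDEFINE = DRIVERS.LICENSE;
     STRUCTURE = DRIVER.INFO;
       PHANTOM;
         SUBFILE = DRIVERS;
         VIA = KEY;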

Instead of a SUBGOAL statement in the PHANTOM structure definition, a SUBFILE statement, naming the appropriate subfile, appears. Users trying to use the phantom structure must have access to the named subfile in order to retrieve records from it -- you cannot use phantom structures to circumvent subfile security constraints.

The subfile name given in the SUBFILE statement may be preceded by the "&filename" prefix in case the subfile name is not unique. [See 2.]

The VIA statement has the same possible values as explained above for subgoal phantom structures.

This file construction allows you to search two subfiles simultaneously. For example, you could look for people who have blue eyes and blue cars, as shown below. This example takes advantage of phantom structures in WHERE clauses. [See 23.2.]
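
A hypothetical search (the element names, including the phantom element EYE.COLOR retrieved from the DRIVERS records, are illustrative):

     SELECT CARS
     FOR SUBFILE WHERE COLOR = BLUE AND EYE.COLOR = BLUE
     DISPLAY ALL
     ENDF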

Subgoal capabilities are also available through formats. Besides allowing the use of phantom structures in WHERE clauses, the advantage of defining the subgoal in the file definition is that it may be used without a custom format having to be created: you can take advantage of phantom structures with the standard SPIRES format, as shown above, and in some of the system formats, such as $REPORT.

23.2  Capabilities and Restrictions of Phantom Structures

Many SPIRES tools can work with phantom structures. This section will discuss some of them, showing how they can be useful. First, however, here is a list of current phantom structure restrictions:

Some of these restrictions may well be removed in the future, in response to specific user needs.

Here are some of the areas in which phantom structures are useful:

Global and Partial FOR Where Clauses

Phantom elements may be used wherever regular elements can be used in WHERE clauses, including "same structure" processing and "inter-element relations" processing. In the latter case, however, the elements must all be in the same record-type. [See 23.1 for an example of their use with WHERE clauses.] If you need to compare an element in the primary record with one in the phantom structure, you may be able to define a dynamic element that uses $LOOKSUBF or $LOOKSUBG to retrieve the phantom element and then use the dynamic element (only on the left-hand side of the inter-element relationship) in comparison with the primary record's element.

Element Filters

Both the data element being filtered and the elements within any filter's WHERE clause may be phantom structures or elements. The restriction noted above for inter-element relations applies to filter WHERE clauses too.

Here is an example of a valid SET FILTER command:
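
For instance (the structure and element names here are illustrative; PLEDGE.RECORD stands for a phantom structure):

     SET FILTER PLEDGE.RECORD WHERE AMOUNT > 100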

Phantom Structures and the $GETxVAL Functions

You can use the $GETxVAL functions ($GETCVAL, $GETXVAL, $GETUVAL and $GETIVAL, which retrieve "unconverted" and "converted" element occurrences from a referenced record) to retrieve element occurrences from within a phantom structure. However, you cannot use the "structural occurrence map" parm on those functions to navigate within nested structures in the referenced record. Note that to retrieve multiple phantom structure elements it is more efficient to write a custom SPIRES format than to use these functions multiple times. See the manual "SPIRES Protocols" for more information on these functions, or EXPLAIN them online.

The SET ELEMENTS Command

The SET ELEMENTS command may be used to set the phantom structure for display in the standard SPIRES format. It may also be used to set individual phantom elements within a phantom structure. Similarly, the entire phantom structure or elements within the phantom structure may be named in the "element-list" option of the TYPE command. If individual elements of a phantom structure rather than the entire phantom structure are set, the SHOW ELEMENTS SET command will list those elements separately under a heading that names the phantom structure, e.g.,
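
The listing might look roughly like this (the layout is approximate, not a verbatim display):

     Elements set:
       NAME
       ADDRESS
     In phantom structure PLEDGE.RECORD:
       AMOUNT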

In this example, three elements are set: NAME and ADDRESS in the goal record, and AMOUNT in the phantom structure PLEDGE.RECORD.

All other options of the SET ELEMENTS command work with phantom structures too. You can, for example, add or subtract phantom structures or individual elements within them from the set element list with the "SET ELEMENTS + elements" or "SET ELEMENTS - elements" command. You may also use the "SET ELEMENTS ALL + elements" version of the command, to set all the non-virtual elements of the goal record (the ALL portion of the command) plus any additional virtual elements, including phantom structures or their elements.

If you specify a name that is both the name of an element in the main goal record and an element within the phantom structure, the element in the goal record will be retrieved. To retrieve the element in the phantom structure in such a case, precede the element name with the name of the phantom structure, followed by the "@" sign. For instance, to set the AMOUNT element in the PLEDGE.RECORD phantom structure if there is an AMOUNT element in the goal record as well:
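
(The command below is a reconstruction based on the "@" rule just described.)

     SET ELEMENTS NAME ADDRESS PLEDGE.RECORD@AMOUNT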

Alternatively, of course, you might find that the AMOUNT element in the phantom structure has a unique alias that does not match any in the goal record, which you may use instead of the primary mnemonic.

The SEQUENCE Command and SPISORT

Records in a search result or stack may be arranged in order by the values of elements within phantom structures, using the SEQUENCE command, e.g.:
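
(A reconstructed example; the element names are illustrative.)

     SEQUENCE NAME PLEDGE.RECORD@AMOUNT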

The list of element names following the SEQUENCE command verb may contain phantom elements.

Phantom elements may also be used as sorting criteria when the SPISORT procedure is used. That is, phantom elements may be named when you create a SPISORT input file with the DEFINE SET command, e.g.:

The $ELEMTEST and $ELIDTEST functions

A phantom structure or one of its phantom elements may be named in the $ELEMTEST function. Besides the value returned from the function, the system variable $ELEMID will also be set, which can in turn be used, for example, in the $ELIDTEST function. In other words, $ELEMID can exist for phantom structures and their elements.

Formats

Formats can also take advantage of phantom structures. Unlike other subgoal processing in formats, phantom-structure subgoal processing does not require you to retrieve the pointer or key and establish it in a VALUE statement in the calling label-group. Instead, you basically treat the phantom structure as you would any other structure.

Complete coding details are provided in the manual "SPIRES Formats".

For the system format $REPORT, individual elements within a phantom structure may be named for display. The same rules discussed earlier under "The SET ELEMENTS Command" apply here when an element name in the phantom structure matches the name of an element in the goal record.

23.3  Defining Phantom Structures Dynamically with DEFINE ELEMENT

You can also create a phantom structure using a variant on the DEFINE ELEMENT command. The phantom structure is not permanently defined, as it is when you create it in the file definition, but it does last as long as your current subfile is selected.

Here's a situation where dynamic phantom structures are handy. Suppose you are directly examining index records for a subfile:

You could next connect to the actual goal record of the subfile identified by the POINTER element by dynamically defining the goal record as a phantom structure:
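
Such a command might look like this (a sketch following the DEFINE ELEMENT syntax given below; the element name GOALREC, the record-type name GOAL, and the use of VIA LCTR are assumptions):

     DEFINE ELEMENT GOALREC SUBGOAL GOAL VIA LCTR AS POINTER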

This could be useful, say, in a debugging situation where you need to compare element values in a goal record with what is actually indexed for them. But there are as many reasons for using them as there are for normal phantom structures.

The main advantage of dynamic phantom structures is that you do not need to change and recompile the file definition to create them. That means that people who can't change the file definition can create them for themselves, and even those who can change the file definition can create a phantom structure quickly for one-time use without making permanent changes to the file definition.

To create a dynamic phantom structure, first select the subfile or attach the record-type that has the primary set of records you want to use. Next, issue a DEFINE ELEMENT command, following the special syntax below, that identifies the subgoal or alternate subfile that you want to use as the phantom structure and establishes the linkage to it:

 DEFINE ELEMENT elemname {SUBFILE subfile-name|SUBGOAL rec-type}...
  ... [VIA via-type] [FOR primary] [TYPE type] AS expression

where:

Once you have used the DEFINE ELEMENT command to establish the phantom structure, you can use the structure in the same ways you would use a compiled one. [See 23.2.]

24  Temporary SPIRES Files

A "temporary file" in SPIRES is a file with a single record-type whose data is stored only for the duration of the current SPIRES session. It may be useful as a way for applications to store data about the current session for use during it. Or it may be useful for building smaller subsets of a subfile's goal records for faster manipulation of just the data you need. Or it may be useful for creating utilities that take output from SPIRES or non-SPIRES commands, store the data as individual records, and re-interpret the data using SPIRES tools such as the $REPORT format.

You define a temporary SPIRES file simply by creating a goal record definition, which you add to the RECDEF subfile and compile. See the manual "SPIRES File Definition", chapter C.7.1 for information about RECDEF records; online, [EXPLAIN RECDEF.] It may have all the features of any record definition, except that the file will have no other record-types (meaning, among other things, that the file will have no indexes), so it can contain no same-file subgoal links. [Well, all right, that was a slight exaggeration; in fact, the record definition for the temporary file can be defined with a phantom structure that subgoals to itself. You define the phantom structure like this: "VIRTUAL; ELEMENT = phantom.name; REDEFINES elem.name; TYPE = STRUCTURE; STRUCTURE = phantom.name; SUBGOAL = TEMPF;" The important part is the SUBGOAL = TEMPF statement, which tells SPIRES the phantom structure will subgoal to the current temporary file.]

Once the RECDEF record is compiled, you may name it in either a SELECT command or an ATTACH TEMPORARY FILE command:
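
The command takes one of these forms (the record key is shown in its general "gg.uuu.recdef" form, explained below):

     SELECT ORV.gg.uuu.recdef
     ATTACH TEMPORARY FILE ORV.gg.uuu.recdef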

where "gg.uuu.recdef" is the RECDEF record key. (If it is your own RECDEF, you may replace "ORV.gg.uuu." with a period or asterisk.) As a shortcut, ATTACH TFILE works for ATTACH TEMPORARY FILE.

As soon as it is selected or attached, you may start putting data into your temporary file. You may batch data in, optionally using the WITH FAST option on INPUT BATCH; you may use input formats; you may add the records one by one; you may transfer and update records already in it; you may remove records or dequeue them. You can do just about anything you would normally do to update a SPIRES subfile.

And once the records are in your temporary file, you may use any non-index-based search and display commands and techniques to work with the records, including: SEQUENCE and SPISORT; dynamic and declared elements; dynamic phantom structures; the $REPORT and $PROMPT formats; Global FOR; Partial FOR; etc.

But the entire temporary file vanishes as soon as you leave SPIRES or CLEAR SELECT or select a different subfile. It cannot be recovered.

You may create the temporary file within a path when you have another subfile selected as the primary. Both the SELECT and the ATTACH TEMPORARY FILE commands may include the THROUGH prefix. Again, however, if you clear the primary subfile or if you explicitly clear the temporary file's path, the file's contents will be discarded.

You may also retrieve data from a temporary file by naming it as a subfile subgoal. For example, you can use the $LOOKSUBF function to retrieve data from the temporary subfile when you are working with another subfile:

The above function would look in the temporary file TEMPIDFILE for the record whose external key is 9382958 and return the value of the NAME element found there.

You may also compile format definitions against a temporary file's record definition. To do that, you need to tell the compiler that the format is for a temporary file (whose characteristics, which the compiler needs, are stored in a different place from a normal file's characteristics), by using the FORMAT-OPTIONS statement in the format definition:

Then you name the key of the RECDEF record in the FILE statement and any value in the RECORD statement of the format:
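
For instance (TEMPF here is an arbitrary illustrative value; any value is accepted in the RECORD statement):

     FILE = ORV.gg.uuu.recdef;
     RECORD = TEMPF;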

The MAXVAL setting for temporary files is 4K; as of this writing, there is no way for you to specify an alternative value yourself.

If you are loading records from another SPIRES subfile into a temporary file, a good method to use is the INPUT LOAD procedure, described in section 2.3.3a of the manual "SPIRES File Management". Online, [EXPLAIN INPUT LOAD PROCEDURE.]

To connect to a temporary file via a phantom structure, be sure to type the name with the "@ORV.gg.uuu.name" form in the phantom element's SUBFILE statement, e.g.:
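
For instance (a sketch; the file name TEMPIDFILE and the VIA value are illustrative):

     PHANTOM;
       SUBFILE = @ORV.gg.uuu.TEMPIDFILE;
       VIA = KEY;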

24.1  DECLARE FILE

As an alternative to creating and attaching a temporary file by means of a RECDEF record, it is also possible to create one by means of DECLARE FILE through ENDDECLARE. The following sample commands show examples of this facility.
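
A hypothetical sketch of the construct (the path name TEMP and the element names are invented):

     thru temp declare file
        KEY = ID;
        ELEM = NAME;
        ELEM = CITY;
     endeclare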

As you can see in the above example, you can define a file dynamically with essentially a RECDEF definition within the DECLARE / ENDECLARE pair. SPIRES reads the enclosed data, compiles the file, and leaves it attached either as a primary file or through a path. An ENDECLARE (abbrev ENDE) statement must be given to conclude the process.

You can have one DECLARE FILE assigned to any one path, including the primary path, as shown above. DECLARE FILE creates a temporary file which acts like the DEFQ of a normal SPIRES file. [See 24.]

24.2  DECLARE SUBFILE subname

You can use a DECLARE SUBFILE statement to create a dynamic subfile with the following construct, in the same manner as with the DECLARE FILE statement. [See 24.1.]

The following gives an example of this construct.
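
A hypothetical sketch (the path name, the group/user portion, and the element names are invented; SUBNAME must meet element-name requirements):

     thru temp declare subfile @gg.uuu.SUBNAME
        KEY = ID;
        ELEM = NAME;
     endeclare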

Note that the construction of the subfile name is in the same form as that shown in section 24 (@gg.uuu.name), and the name portion, SUBNAME, must meet all the requirements of an element name (16 characters or less, etc.). [See 24.]

25  The WITH DATA Option: Input Data as Part of the Command

The WITH DATA option gives you the opportunity to provide input data to SPIRES in the standard SPIRES format without having to put the data into your active file. In one case, for example, the amount of data you have for merging into a record is just a single element value, and you'd like to save yourself the bother of creating an active file just for the one element. In another case, you are writing a protocol to add some small records to a subfile, but writing an input format or trying to incorporate proper active file control would be extremely tedious.

For those situations or similar problems, you may be able to use the WITH DATA prefix, including the data you want to input on the record-input command (such as ADD or MERGE), following a semicolon.

Here is how the WITH DATA prefix is used:
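
In outline (as described in the next paragraph):

     WITH DATA input-command; input-data;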

where the "input-command" is ADD, UPDATE, MERGE, ADDUPDATE or ADDMERGE, with any other options it may have. Under Global FOR, the command may be MERGE. Under Partial FOR, the command may be ADD, UPDATE or MERGE. After the command comes a semicolon, followed by the input data, in standard SPIRES format. For full details about the standard format, see the manual "SPIRES Searching and Updating"; online, [EXPLAIN STANDARD FORMAT.]

Here are three sample commands using WITH DATA:
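
(The commands below are illustrative; the keys, element names and the variable in the last command are invented, and the last command assumes variable substitution has been requested with the "/" mechanism.)

     WITH DATA ADD; ID = 1001; NAME = SMITH, SAM;
     WITH DATA MERGE; PHONE.NUMBER = 555-1234;
     WITH DATA ADD; ID = 1002; NAME = &NAME;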

Notice that the data, as it always does in standard format, ends with a semicolon.

Note too that variable substitution, as shown in the last example, may be used. Keep in mind, however, that if you do use the "/" to request variable substitution, any time you want the literal characters "$" or "#", you should double them.

You are of course limited to the maximum command length, which is 160 characters under ORVYL control. However, in a protocol, using continuation lines, you can extend the data into multiple lines:

Leading blanks on each continuation line are stripped off. The maximum length of the input data is 4090 characters.

Any input format that is set is ignored when a command is issued with the WITH DATA option.

26  DECLARE EXTERNAL DATA

DECLARE EXTERNAL DATA allows you to manipulate data in external files using device areas, formats, and built-in commands. You do this with records in the $EXTERNAL file, or by means of DECLARE / ENDDECLARE as shown below: [See 8.2.]

      DECLARE EXTERNAL DATA
         FORMAT = External-format-name;
         AREA = External-data-region;
         COMMAND = What-to-do;
           .... (other options)
      ENDECLARE

where
   External-format-name is the name of an INPUT format which
      will be used to read the external file's data.
   External-data-region is the name of a SPIRES Device AREA
      that you have defined to hold the external file data.

Other input options include PARMS and TRACE, which allow you to send parms to the format or to watch the input format's actions with TRACE (it traces ALL).

It is probably easiest to show an example of external file processing.

  -? use FILEX.TXT
  -? ..list
   ID = Record.1;  NAME = Bill;  CITY = Sunnyvale;;
   ID = Record.2;  NAME = Jill;  CITY = Mountain View;;
   ID = Record.3;  NAME = John;  CITY = Sacramento;;
   ID = Record.4;  NAME = Jack;  CITY = Mount Shasta;;
  -? declare external file
        KEY = ID;
        ELEMENT = NAME;
        ELEM = CITY;
     endeclare
  -Record definition compiled
  -? show elem char
  Record TEMPF of EXTERNAL
   Sec  Occ   Len  Type    St/El       Element
   ---  ----  ---  ------  -----       -------
   Req  Sing       String  00/00  key  ID
   Opt  Mult       String  00/01       NAME
   Opt  Mult       String  00/02       CITY
  -? define area areax(1,80) on file
  -? assign area areax to os file FILEX.TXT input
  -? declare external data
        format = $input;
        area = areax;
        command = scan;
     endeclare
  -? set format $report id name city
  -? for subfile where city string mount
  -? display all
  May 28, 1992                                              Page 1

  id                   name                 city
  -------------------  -------------------  -------------------
  Record.2             Jill                 Mountain View
  Record.4             Jack                 Mount Shasta
  -? endf
  -End of global FOR
  -? close area areax
  -? clear select

As you can see, SPIRES was able to input the data in FILEX.TXT (it could have been the ACTIVE file) using the $INPUT format (which inputs the standard SPIRES form), by means of the DECLARE EXTERNAL DATA / ENDDECLARE command (SCAN), when the "display all" command occurred. You can use your own INPUT format to retrieve data from a device area you have defined.

Note that nothing actually goes into the external file. The external file only acts as a conduit for mapping the external data. Processing of external data occurs when you issue some command against the external file, such as the "display all" in the example above. That's why the assigned area (areax) isn't closed until processing is done.

The example above shows command lines (-?) and DECLARE / ENDDECLARE definitions, all of which must be in a protocol to allow DECLARE / ENDDECLARE to be processed. The $EXTERNAL record-definition (in RECDEF) is used to process DECLARE EXTERNAL DATA.

You could also use path processing, SET DECLARE PATH, and WITH DECLARE commands directly at the terminal, or in a protocol, to get the same effect. [See 6.2.] In that case, you would use your own EXTERNAL DATA definitions in the EXTERNAL subfile.

  -? thru ext select EXTERNAL
  -? set declare path ext for external data
  -? with declare *mydef declare external data

The three commands above would replace the DECLARE EXTERNAL DATA / ENDDECLARE in the original example. Your *mydef record in the EXTERNAL subfile defines the format, area, and command.

27  Hierarchical Records in Multiple Table Output: DEFINE TABLE

The 1990s version of DEFINE TABLE packages a rather complicated technique, producing multiple-table output from a single pass over SPIRES hierarchical records, into a relatively simple tool. This chapter describes the packaging at the SPIRES command level, through several new SPIRES commands.

Those commands are DEFINE TABLE, GENERATE TABLES, CLEAR TABLES and SHOW TABLES; each is described in the sections below.

27.1  The DEFINE TABLE Command

The syntax of the DEFINE TABLE command is:
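
Reconstructed from the examples below, the general form is approximately:

     DEFINE TABLE table-name FOR EACH element [(table-options)] ...
        ... AS element-list [ON FILE {dsname|ACTIVE}]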

where:

The following example demonstrates the use of the DEFINE TABLE and GENERATE TABLES [See 27.2.] commands:

->  select reqs
->  restore stack.table.example
-Stack: 3 RECORDS
-> type reqno requestor po.number order.vname item.number po.quantity po.unit

REQNO = M22747;
REQUESTOR = SAM SMITH;
PO.NUMBER = M227470;
  ORDER.VNAME = SACRAMENTO MEDICAL FOUNDATION;

REQNO = SA7426;
REQUESTOR = JUDY JONES;
PO.NUMBER = SA74260;
  ORDER.VNAME = BOEHRINGER-MANNHEIM BIOCHEMICAL;
  ITEM.NUMBER = 1;
    PO.QUANTITY = 1;
    PO.UNIT = EA;
  ITEM.NUMBER = 2;
    PO.QUANTITY = 1;
    PO.UNIT = EA;

REQNO = RT9621;
REQUESTOR = SAM JONES;
PO.NUMBER = RT96210;
  ORDER.VNAME = STANFORD BOOKSTORE;
  ITEM.NUMBER = 1;
    PO.QUANTITY = 3;
    PO.UNIT = EA;
  ITEM.NUMBER = 2;
    PO.QUANTITY = 1;
    PO.UNIT = EA;

->  define table order for each po.number (type=BTF) as PO.Number Rqstr Vend
->  define table items for each item.number (type=BTF) as PO.Number
        Item.Number PO.Quantity PO.Unit
->  for stack
->  in active generate tables (order,items) all

->  list

100|DATA|ORDER;
101|PO.Number|Rqstr|Vend;
103|7|20|30;
|M227470|SAM SMITH|SACRAMENTO MEDICAL FOUNDATION;
|SA74260|JUDY JONES|BOEHRINGER-MANNHEIM BIOCHEMICAL;
|RT96210|SAM JONES|STANFORD BOOKSTORE;
109|End of table;
100|DATA|ITEMS;
101|PO.Number|Item.Number|PO.Quantity|PO.Unit;
103|7|3|3;
|SA74260|1|1|EA;
|SA74260|2|1|EA;
|RT96210|1|3|EA;
|RT96210|2|1|EA;
109|End of table;

The TYPE=SQL Table Option

You can use DEFINE TABLE to create SQL INSERT INTO and DELETE FROM statements. To do so, you include the TYPE=SQL table option in the DEFINE TABLE command, following it with either ADD or REMOVE, depending on whether you want to create INSERT INTO or DELETE FROM commands. If you want to do both, because you are doing updates, create two separate tables with separate DEFINE TABLE commands.

You may also want to use the TYPE=NUMERIC element option (that is, you specify it after naming a numeric element); this means SPIRES will not put quotation marks around the numeric element value, which it does by default for non-"TYPE=NUMERIC" elements.

Here is an example of DEFINE TABLE using the TYPE=SQL option:
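
A sketch of such a command pair (the table and element names, and the exact placement of the TITLE statement and the element parms, are illustrative guesses):

     define table delrest for each rest.id (type=SQL,REMOVE) as
         rest.id (type=NUM,col='r_id') title='restaurant_name'
     define table addrest for each rest.id (type=SQL,ADD) as
         rest.id (type=NUM) rest.name city title='restaurant_name'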

The above commands generate the following output:

The DELETE statements are created from the SQL,REMOVE table and the INSERT statements are created from the SQL,ADD table. The relational table name "restaurant_name" comes from the TITLE statement. The "r_id" column name in the where clause of the delete statement comes from the COL='r_id' element parm. Note that the numeric restaurant id number is not quoted in either the DELETE or INSERT statements due to the use of the TYPE=NUM element parm. Each INSERT or DELETE statement is followed by a "go" statement so that the DEFINE TABLE output can be used as an input script for ISQL.

Writing Tables to OS Files Instead of the WYLBUR Active File

The ON FILE option lets you write tables directly to OS files instead of the WYLBUR active file. This is useful in circumventing the 235-characters-per-line limit of WYLBUR data sets.

You can send some tables to OS files and one or more tables to your active file at the same time. For OS files, each table must be assigned to a separate file. If you accidentally assign multiple tables to one file, the last table defined is the one that appears in the file. However, if you assign multiple tables to the active file by using the ON FILE ACTIVE option multiple times, they will all appear in the active file, one after the other.

Note: if you are assigning one or more tables to OS files, do not add the IN ACTIVE option to the GENERATE TABLES command; that will override the ON FILE clause, causing all tables to be generated in the active file.

Below is an example of DEFINE TABLE commands used to define one table in an OS file and another in the active file:

The CITY table is placed in the OS data set called R.CITIES, on a scratch volume. Remember that the NAME table is placed in the active file even though the GENERATE TABLES command did not have the IN ACTIVE prefix on it; had it been included, both tables would have been placed in the active file, which is not what was intended.

The OS data set is not allocated until the GENERATE TABLES command is issued.

27.2  The GENERATE TABLES Command

The tables are actually created by the GENERATE TABLES command, which is issued under Global FOR. The syntax of the GENERATE TABLES command is:

27.3  The CLEAR TABLES Command

The syntax of the CLEAR TABLES command is:

This command is used to eliminate the definition of a previously defined table (specified as "name") or all the currently defined tables if no single table is named.

27.4  The SHOW TABLES Command

The syntax of the SHOW TABLES command is:

This command is used to display the definitions of all tables currently defined. Use the IN ACTIVE prefix with its standard options to put the table definitions into your active file.

The ALL option shows a full listing of all tables currently defined, including tables defined on other paths.

28  The PERFORM Commands

The PERFORM commands are system utilities designed to do a variety of tasks frequently needed by the SPIRES user. Because they are system protocols, they are able to use all the facilities of SPIRES and WYLBUR to do such things as print listings of source code and documentation, and create "standard" file definitions for commonly used applications, such as protocol files. The following PERFORM commands are currently available:

All but the last three are described in this chapter; the last three are described in the "SPIRES File Management" manual. Online, issue the EXPLAIN command, followed by the specific PERFORM command, for more information, or issue "PERFORM ?" to get a list.

Note that the PERFORM command can have a sub-system option immediately following the word PERFORM. These options are: PROduction, PREprod, TESt, or DEVelopment. The default sub-system is PROduction. These sub-system options cause P,N,T,D versions of other macros to be executed. For example: PERFORM DEV TABLE CREATE ... will execute the .DPERFORM.TABLE.CREATE macro instead of the normal .PPERFORM.TABLE.CREATE version.
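
The macro-name substitution can be sketched as follows; this Python fragment only illustrates the naming rule stated above, with the PREprod letter "N" taken from the P,N,T,D list, and the function name invented:

```python
# Sketch of the sub-system prefix substitution described above.
# The letter for each sub-system follows the P,N,T,D list in the text.
PREFIX = {"PRODUCTION": "P", "PREPROD": "N", "TEST": "T", "DEVELOPMENT": "D"}

def macro_name(subsystem, macro):
    """Return the versioned macro name for a PERFORM sub-system."""
    return ".%sPERFORM.%s" % (PREFIX[subsystem], macro)

print(macro_name("DEVELOPMENT", "TABLE.CREATE"))  # .DPERFORM.TABLE.CREATE
print(macro_name("PRODUCTION", "TABLE.CREATE"))   # .PPERFORM.TABLE.CREATE
```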

28.1  The PERFORM PRINT Command

The PERFORM PRINT command creates source code listings formatted such that the structure of the code is easily discernible, thus making the listing easier to work with. By default, the PERFORM PRINT command prints listings on GBAR forms, with selected lines in boldfaced type. However, you can change the default formatting if, for instance, you want the listing to go to the self-service printer or a local printer, or you want different paper or character sets. If there are multiple records being printed (ALL, RESULT or STACK option), each new record will begin on a new sheet of paper.

You must select the file containing the source code before using the PERFORM PRINT command to obtain a listing. The files that may be selected when the PERFORM PRINT command is issued are:

For protocol files, it is assumed that the file was generated by one of the system utilities (i.e., ..BUILD.PROTOCOLS.FILE or PERFORM BUILD PROTOCOL). If not, the PERFORM PRINT command may not work properly for these files. [See 28.1.3.]

The syntax of the PERFORM PRINT command is:

The parameter list is optional (though as shown, it must be enclosed in parentheses) and provides a way of explicitly altering the default print parameters or invoking other options. You may include as many of the possible parameters as you wish, separated by commas, and in any order. The possible parameters are:

The only required option of the PERFORM PRINT command is the specification of the record(s) you want printed. You may print a single record by specifying the record key, all the records in a RESULT or STACK, or ALL the records available to your account. A record-key value of ALL, STACK or RESULT (or an abbreviation) must be enclosed in quotes or apostrophes so that it will not be confused with these options. Multiple word keys (as some protocols have) must also be enclosed in quotes or apostrophes.

Print options are any valid WYLBUR print options (COPIES, BIN, etc.). Basic print defaults are FORMS GBAR and CHARS(TN12,BD12). You may also specify DUPLEX if the output is going to a printer that prints on both sides of the sheet of paper; the output will be printed on both sides, with each new record starting on an odd-numbered page.

The PERFORM PRINT command places the listing at the end of the active file and removes it after submitting the print job, so you will not be prompted "Ok to Clear?". If you add the IN ACTIVE prefix to the PERFORM PRINT command, the listing will be placed in your active file so that you may save it for future reference. In this case, you must then issue a PRINT command to receive a printed listing.

28.1.2  Error messages

You may encounter one of several error messages when using the PERFORM PRINT utility. Most of these messages are self-explanatory, such as "No subfile selected" or "Requested record does not exist".

However, if you receive the message "PRINT unsuccessful", it could mean a number of things. In some cases, you might have specified WYLBUR print options incorrectly, or SPIRES might have encountered a CORE EXHAUSTED condition. Sometimes, SPIRES cannot write data within the line length you have set. When you receive the "PRINT unsuccessful" message, you will also see the WYLBUR or SPIRES diagnostic message concerning the error. If you cannot interpret the diagnostic message, see your SPIRES consultant.

28.1.3  Printing from Protocols Files

The PERFORM PRINT command can be used to print protocol listings from a private protocols file if that file was created with the PERFORM BUILD command or the BUILD.PROTOCOLS.FILE public protocol. [See 28.3.] These files use the $PROTOCOL format for input and display of protocol records; this format allows you to imbed formatting instructions in the protocol that can be used by the PERFORM PRINT command. These instructions are -$PAGE, -$FRONTPAGE, -$TITLE, -$FORMAT, -$AUTHOR, and -$SUBJECT. [See 28.3.3 for details about the $PROTOCOL format.]

The listings from a protocols file differ slightly from those from the other files described above. They will have a page header with the name of the protocol, the name of the file, and the date and time the listing was generated. If there is a $TITLE statement in the protocol, its value will also be included in the header. A page footer will be generated, including the protocol name and the page number.

The line length of the listing is determined by the printer and forms used. The line numbers, however, will remain accurate; that is, the line numbers of the listing will correspond with the line numbers of the protocol as transferred from the file, even though some lines might wrap to multiple lines of the printed listing.

A WYLBUR command, PFORMAT, can be used to cause protocols to be formatted attractively for easy reading. Put the protocol into your active file and issue the PFORMAT command. When you use the formatted active file as your input record, the $PROTOCOL format will retain the formatting.

WYLBUR will indent lines like this:

When PERFORM PRINT is used to print multiple listings of protocols (i.e., with the ALL, RESULT, or STACK options), a table of contents is generated at the end of the listing.

28.2  The PERFORM PUBLISH Command

The PERFORM PUBLISH command prints copies of SPIRES documents. It may or may not be available on the computer where you use SPIRES; contact your SPIRES consultant for further information. (At Stanford, for instance, it is not available; however, you may instead use the PUBLISH command to print documents there.)

28.3  The PERFORM BUILD Command

The PERFORM BUILD command initiates the creation of a file definition for a predefined "standard" SPIRES application. Currently, only a protocols file can be built using this command, but future possibilities include a bibliographic file and a mail list file.

The syntax for the command is

where "option" is a single term designating the kind of file you wish to generate. The command may be preceded by an IN ACTIVE prefix, in which case the file definition will be placed in your active file, but not added to FILEDEF or compiled.

See Section 28.3.2 for information about the structure of the file definition created by this utility.

28.3.1  The PERFORM BUILD PROTOCOLS Command

The PERFORM BUILD PROTOCOLS command will generate a protocols file based on the standard definition described below in Section 28.3.2. After you issue the command, you will be given instructions and asked several questions about what you wish the file to be named and which accounts you wish to give access to the file. The protocol will generate the file definition, add it to FILEDEF, and compile it for you. Following is a sample session with instructions:

28.3.2  PROTOCOLS File Definition

The file definition for the protocols file created by the PERFORM BUILD PROTOCOLS command contains the following elements:

A single record definition for the goal record is kept as GG.SPI.PROTOCOL in the RECDEF file. This enables the SPIRES group to upgrade the protocols file definition without requiring every user to recompile their file definition.

The protocols file contains one goal record-type and two indexes -- MODDATE and SUBJECT. The former is a standard SPIRES date index; the latter is a word index built from the SUBJECT element. The searchterms are:

A formats statement is included in the subfile section to set the system format $PROTOCOL whenever you use the file. [See 28.3.3.]

28.3.3  The $PROTOCOL Format

The $PROTOCOL format works with the protocols files created with the PERFORM BUILD PROTOCOLS command and those created by the older ..BUILD.PROTOCOLS.FILE public protocol. It handles all output (display and transfer) requests and all input (ADD, UPDATE, MERGE) requests. In addition, there are frames for multiple record processing under report mode when using the PERFORM PRINT utility to print several protocols. [See 28.1.3 for details about the PERFORM PRINT utility.]

There is one form of record display used for both display requests and transfers, and this form is also used for input. Leading blanks are not stripped from lines on input, so you can indent your commands as you please, or you can allow the PERFORM PRINT utility to do all the formatting for you.

The first line of the display starts with an asterisk, followed by the protocol name, then DEFDATE, MODDATE, MODTIME, and MODACCT in parentheses:

The remainder of the display consists of the protocol commands, one per line.

Imbedded formatting instructions are defined that may alter the way text is formatted through the PERFORM PRINT utility. A formatting instruction begins with '-$' or '- $' followed by a keyword. The keyword may be in upper or lower case and may be abbreviated to three characters. It may be followed by additional information; if no value is defined for a particular keyword, anything after the keyword is stored but otherwise ignored and does not affect the display. If the keyword takes an option, anything after the option is ignored.

For regular displays and transfers, the lines with formatting instructions are output simply as lines of data. Under the PERFORM PRINT utility, the leading '-$keyword' never prints, though data following might. The AUTHOR and SUBJECT statements always print.
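
As a sketch of the recognition rules described above (prefix '-$' or '- $', case-insensitive keyword, three-character minimum abbreviation), here is a hypothetical Python parser; it is not the actual $PROTOCOL implementation, and it uses the six keywords listed in 28.1.3:

```python
# Illustrative parser for '-$keyword' formatting instructions.
KEYWORDS = ["PAGE", "FRONTPAGE", "TITLE", "FORMAT", "AUTHOR", "SUBJECT"]

def parse_instruction(line):
    """Return (keyword, rest-of-line) or None if not an instruction."""
    for prefix in ("-$", "- $"):
        if line.startswith(prefix):
            word = line[len(prefix):].split(None, 1)
            name = word[0].upper() if word else ""
            rest = word[1] if len(word) > 1 else ""
            if len(name) >= 3:                 # minimum abbreviation
                for kw in KEYWORDS:
                    if kw.startswith(name):
                        return kw, rest
            return None
    return None

print(parse_instruction("-$TITLE My Protocol"))  # ('TITLE', 'My Protocol')
print(parse_instruction("- $tit My Protocol"))   # ('TITLE', 'My Protocol')
```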

The formatting instructions are:

The "-$" line prefix also identifies two other special cases -- the SUBJECT and AUTHOR fields:

28.4  The PERFORM FILEDEF SUMMARY Command

The PERFORM FILEDEF SUMMARY command produces a summary presentation of a file definition record, showing information about each of the three major sections of the file definition:

The command can be issued only by the file owner. Since the summary is of the file definition, rather than the file, this command can be used whether the file is compiled or not.

The syntax for the command is

where, if specified, "filename" designates the name of the file for which the summary information is desired. If "filename" is not specified, the file containing the currently selected subfile is assumed, if that file belongs to you. If "filename" is not specified and there is no subfile currently selected, you will be asked to supply a filename.

The information in the display includes the following:

Here is a simple example that demonstrates some of the command's features. Note that the sample display is condensed horizontally so that it fits the printed page better.

28.5  The PERFORM FORMAT LIST Command

The PERFORM FORMAT LIST command produces a list of all of the formats that have been written for a file. The information displayed contains the FORMAT-NAME, the record-type for which the format is written, the source record key of the format, and the date that the format was last compiled. The information this command displays can be very useful when the file owner needs to move or copy a file to another account or get rid of the file completely, since it shows what formats would need to be copied or "zapped".

Only the file owner can see all the formats; individual users will see the information for only those formats, if any, that belong to them; that is, only those formats whose definition record keys begin with the user's account.

The syntax for the command is

where, if specified, "filename" designates the name of the file for which the format list is desired. If "filename" is not specified, the file containing the currently selected subfile is assumed, if that file belongs to you. If "filename" is not specified and there is no subfile currently selected, you will be asked to supply a filename.

You may precede the command with the IN ACTIVE prefix if you want to direct the output to your active file.

Here is an example that demonstrates the PERFORM FORMAT LIST command. Note that the display here has been condensed horizontally so that it fits better on the printed page.

Caution: The listed formats do not include general-file formats (those that have the GEN-FILE statement in their definitions), even if they are defined for the selected file.

28.6  PERFORM SYSTEM PRINT Command

The PERFORM SYSTEM PRINT command is a SPIRES interface for the original mainframe PRINT command. By handling PRINT in SPIRES, it is possible to transform mainframe PRINT into whatever equivalent facility exists on the platform where SPIRES is running. On the mainframe, PERFORM SYSTEM PRINT simply becomes QUIETLY TRY PRINT. On Unix-SPIRES, PRINT becomes PERFORM SYSTEM PRINT. This description is for Unix-SPIRES.

The syntax of this command is:

In the option description below, the following terms are used:

Option words are shown in UPPER-lower case, with the UPPER-case portion required and the lower-case portion optional. For example, DUPlex can be specified as DUPLEX, DUPLE, DUPL, or DUP, in mixed case such as DuPleX.
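
The matching rule can be sketched in Python; this illustrates the stated rule only and is not the actual SPIRES option parser:

```python
def matches(option_word, given):
    """True if 'given' is a valid abbreviation of an UPPER-lower option word."""
    required = "".join(c for c in option_word if c.isupper())
    full = option_word.upper()
    g = given.upper()
    # must be a prefix of the full word, at least as long as the required part
    return full.startswith(g) and g.startswith(required)

print(matches("DUPlex", "DUP"))      # True
print(matches("DUPlex", "DuPleX"))   # True
print(matches("DUPlex", "DU"))       # False: shorter than the required part
```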

The following options are recognized, and acted upon:

The following options, although recognized, are ignored:

The following descriptions detail the options that are acted upon:

[See 28.6.1.]

28.6.1  FONT TABLE

The following is a CHARS to FONT translation table used on Unix SPIRES. The column of two-character codes on the left represents the first two characters of the CHARS code. CHARS codes can be three or four characters long and begin with one of these two-character codes. The last one or two characters of a CHARS code are either a pair of digits, such as 10, 12, or 15, or a single digit from 1 through 9. Thus, the CHARS code TN8 would be interpreted as TN08. The CHARS codes given in a PERFORM SYSTEM PRINT command are translated using this table. For example:

The FONT names are in the second column. They replace the first two characters of the CHARS code, which is then combined with a pointsize calculated from the last two characters of the CHARS code, becoming: FontnamePointsize. For all CHARS codes except those beginning with "C", nn already represents the Pointsize value. For "Cx" CHARS codes, CHARS specifies the number of characters per inch, which is converted to Pointsize by the formula: (121/nn). Thus, CU10 has a Pointsize of 12, but TN10 has a Pointsize of 10, even though both codes specify the "Courier" font.
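
A Python sketch of this translation follows; the two entries in the sample font table are placeholders for the real table's entries, and the function name is invented:

```python
# Sketch of the CHARS -> FontnamePointsize translation described above.
FONTS = {"CU": "Courier", "TN": "Times"}   # hypothetical sample entries

def chars_to_font(code):
    prefix, size = code[:2].upper(), code[2:]
    if len(size) == 1:            # a single digit, e.g. TN8 means TN08
        size = "0" + size
    nn = int(size)
    if prefix.startswith("C"):    # "Cx" codes give characters per inch,
        pointsize = 121 // nn     # converted by the (121/nn) formula
    else:
        pointsize = nn            # nn is already the pointsize
    return FONTS[prefix] + str(pointsize)

print(chars_to_font("CU10"))   # Courier12
print(chars_to_font("TN8"))    # Times8
```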

NOTE: You may also supply FONT and size directly for any CHARS value, such as: CHARS=Symbol9 .

The default fonts for the font-selectors are:

All default fonts have a Pointsize of 10.

Fonts are selected by font-controls in column 1 when there is no carriage-control in column 1, or by font-controls in column 2 when there is carriage-control in column 1, or by imbedded font selectors (bracket surrounded codes like <.b+> or <.b->). Use Wylbur's HELP FORMAT command to learn more about imbedded selectors.

Specifying the CC option indicates carriage-control in column 1. Specifying multiple fonts, as in chars=(font0,font1,font2,font3), signifies font-controls in column 1 or 2 (depending upon NOCC or CC). If fewer than four CHARS values are specified, the remaining values are taken from the default fonts list. Specifying the TRC option also signifies font-controls, with or without CHARS. [See 28.6.]

28.6.2  FORMATTING-CHARACTER TABLE

The following are two formatting-character tables used on Unix SPIRES. The first is the <.font-selector> table. font-selectors may be given in UPPER, lower, or Mixed case. The second is a <:special-symbols> table, all of which are case sensitive, followed by what the symbols translate to in the print output.

The following <.font-selectors(plus or minus)> are stripped:

For example: <.F+> or <.f-> or <.BELL> are all stripped. (Note: If not LPRINT, the colors are converted to Postscript if the COLOR option is given in the PRINT command; otherwise colors are stripped.)

[See 28.6.]

28.6.3  DESTINATION ATTACHED and POSTSCRIPT

If the DEStination option is supplied with the word "attached", then output is meant to be sent to your attached printer, which is usually the printer directly connected to your desktop computer (acting as a terminal). The printer name, "attached", must be given exactly that way, all lower case, no abbreviation.

The PS option signals that the destination printer accepts Postscript. By default, that is what "env:printer" and "Department Printer" use. The "attached" printer is assumed to be non-Postscript, or the terminal program you are using doesn't pass through the Postscript. But if your "attached" printer does handle Postscript, and your terminal program passes the file through (or you configure it to pass through), then add the PS option to dest=attached to send Postscript to your attached printer.

The NOPS option is handy for an "env:printer" or "Departmental Printer" that does NOT handle Postscript. Using this option produces non-Postscript output, just as for attached printers (without the PS option).

If you use the "html" option, output is always NOPS. If you use the "file" option, data is sent to that file, either as Postscript, html, or plain text depending upon choice of "env:printer", "Department Printer", or "Attached Printer", and the PS or NOPS options (or defaults).

On the mainframe, if you logged on using Samson and set your terminal to "samson", there could be a dialog between Samson and the mainframe informing the mainframe of your attached printer type. This resulted in the automatic setting of the LPRINT option on the mainframe (SHOW LPRINT). But there is no such dialog with Unix systems, even for Samson. Therefore, the type of attached printer is NOT known to the Unix software. That is why you might have to indicate PS or NOPS, so print output can be sent to your attached printer in a form it can handle. [See 28.6.]

28.7  PERFORM SYSTEM SEND Command

The PERFORM SYSTEM SEND command is a SPIRES interface for the original mainframe SEND command. By handling SEND in SPIRES, it is possible to transform mainframe SEND into whatever equivalent facility exists on the platform where SPIRES is running. On the mainframe, PERFORM SYSTEM SEND simply becomes QUIETLY TRY SEND. This description is for Unix SPIRES.

The syntax is as follows:

Where <user-list> can be the following:

And <user> can be any of these:

For example:

28.8  PERFORM SYSTEM MAIL Command

The PERFORM SYSTEM MAIL command is a SPIRES interface for the original mainframe MAIL command. By handling MAIL in SPIRES, it is possible to transform mainframe MAIL into whatever equivalent facility exists on the platform where SPIRES is running. On the mainframe, PERFORM SYSTEM MAIL simply becomes QUIETLY TRY MAIL. This description is for Unix SPIRES.

The syntax is as follows:

Where <user-list> can be the following:

And <user> can be any of these:

For example:

In the option description below, the following terms are used:

Option words are shown in UPPER-lower case, with the UPPER-case portion required and the lower-case portion optional. For example, SILent can be specified as SILENT, SILEN, SILE, or SIL, in mixed case such as SiLenT.

The following options are recognized, and acted upon:

The following options, although recognized, are ignored:

The word TO following MAIl is optional.

28.9  PERFORM CHNGREF

The basic syntax of PERform CHNGREF is as follows:

   PERform CHNGREF "this" to "that" [IN "these"] [FOR element]

The word "to" between "this" and "that" is optional. "this", "that", and "these" may be enclosed in quotes or apostrophes; a single word may be given without delimiters. "that" can be a null string, which means you must give it in delimiters; likewise any value containing blanks. A leading semicolon must be delimited to distinguish it from a comment.

Here are some examples:

 1.  PERform CHNGREF this to that ; in all
 2.  PERform CHNGREF "this" 'that' in these
 3.  PERform CHNGREF label ''
 4.  PERform CHNGREF "this" to "that" in 5
 5.  PERform CHNGREF "%title%" to "The true Title" for EXP

Here are more verbose versions of the above:

 1.  PERform CHNGREF "this" TO "that" IN "this"
 2.  PERform CHNGREF "this" TO "that" IN "these"
 3.  PERform CHNGREF "label" TO "" IN "label"
 4.  PERform CHNGREF "this" TO "that" IN 5
 5.  PERform CHNGREF "%title%" TO "The true Title" IN "%title%" FOR EXP
 Number 3 changes the "label" string to null.
 Number 4 makes the change in only the 5th occurrence of the element.
 Number 5 makes the changes to the EXP element.

If there is no ending [FOR element], then the 2nd element of the record-type is assumed. That normally is the element immediately following the KEY of the record, like COMMAND in protocols. But you can specify any record-level element with: FOR element.

When [IN "these"] occurs, only elements that contain the string "these" are chosen for making the "this" to "that" change. When [IN "these"] is not specified, "this" takes its place. Be aware that 'in all' does NOT mean in all occurrences of the element; it means only in occurrences that contain the string "all".

You can specify a positive-non-zero number in place of "these" following the "IN" keyword. When given that way, only that specific occurrence is chosen for the "this" to "that" change. The first occurrence is 1.

The actual change is case-sensitive, for both "this" and "that". The search for "these" (or default "this") is case-insensitive.
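
The two rules can be sketched together in Python; this illustrates the stated behavior only (it is not the actual implementation), and the sample occurrence values are invented:

```python
def chngref(occurrences, this, that, these=None):
    """Apply the "this" -> "that" change to selected occurrences."""
    search = (these if these is not None else this).lower()
    changed = []
    for occ in occurrences:
        if search in occ.lower():          # case-INsensitive selection
            occ = occ.replace(this, that)  # case-sensitive change
        changed.append(occ)
    return changed

occs = ["FIND title computing", "find the record", "Display all"]
# The second occurrence is selected (it contains "find"), but the
# case-sensitive change finds no "FIND" in it, so it is left alone.
print(chngref(occs, "FIND", "look"))
```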

You may get serious diagnostics for things like syntax errors, no subfile selected, no referenced record, etc. You will also get an informational diagnostic (Nothing found to change) when there is no n-th element associated with "IN n", or the search for "these" (or default "this") made no find. All diagnostics are controlled by standard settings.

The purpose of this command is to make it easy for you to change a referenced record, as long as you are dealing with record-level elements.

 -> select explain
 -> reference "FIND COMMAND, SIMPLE"
 -> perform chngref "FIND" to "look"
 -> for *
 -> display

The element being changed is EXP, which is the 2nd element. Note that if you had referenced "FIND COMMAND" instead, nothing would be changed, because that record doesn't contain the actual explanation; it contains a pointer to the actual record.

28.10  The PERFORM SYSDUMP Command

A handy all-purpose debugging tool when you need a snapshot of the current session is the PERFORM SYSDUMP command. PERFORM SYSDUMP places the output from the most commonly used commands for showing the status of the current session (e.g., SHOW SUBFILE INFORMATION, SHOW SUBFILE MAP) into your active file, along with the values of all dynamic, static, and system variables.

The syntax of the command is:

The IN ACTIVE prefix by itself adds nothing new to the command, since PERFORM SYSDUMP always directs its output to the active file. However, it is worth using if the active file is not empty, since you can include the CONTINUE, CLEAR or KEEP option so that you are not prompted "OK to clear?".

The output from the command starts with a line displaying the date and time that the dump began. It then displays the following information or output from commands:

PERFORM SYSDUMP can be used by application developers in Prism after using the SPIRES command to break out of Prism's full-screen environment.

29  Exporting Data from SPIRES to Other Programs: The Exporter

There are many occasions when you want to take data from a SPIRES file and use it as input for another program. For example, you might want to retrieve a set of records and do statistical calculations using SAS, or move data to a microcomputer for use with Lotus 1-2-3 or a data base management system such as Dataease.

The Exporter extracts and formats data from a SPIRES file so that it can be used by a number of other programs, either on the Data Center mainframe or on a microcomputer (usually, an IBM PC). After using the Exporter you will have an active file with data that you can transfer to a microcomputer or input to another package on the mainframe. Note, however, that Exporter only prepares the data in the proper format -- it does not actually place the data in the other package.

The programs currently supported are:

Lotus 1-2-3, Dataease, 10-Base, Knowledge Manager, and MultiMate are popular programs used on microcomputers. SAS and SPSSx are statistical programs available on the Data Center mainframe.

The SPIRES option allows you to convert SPIRES data to a "flat file" while keeping it in standard SPIRES format to place in another SPIRES subfile. [See 29.1.2 for an explanation of a flat file.] The "Other Programs" option allows you to create a tabular display of your data for a variety of reasons, such as creating input for your own programs or converting your data to a flat file for a report, or moving data to a package other than those listed.

Overview of the Exporter

When you first enter the Exporter, you are placed in a full-screen environment with a series of input screens for you to fill in. On these screens, you indicate the system to which you will be downloading the data, which elements from the subfile you want to be retrieved, and how you want the data formatted. For each system, the Exporter picks appropriate defaults for formatting, although you can override them. You then are placed in the Exporter command environment where you define the set of records from the subfile using standard SPIRES commands. The EXPORT command places the data from the selected records in your active file in the appropriate format.

To enter the Exporter:

For example,

Notes: 1) Your terminal must be able to support full-screen display for you to use Exporter. (Again, type HELP TERMINALS for information.) 2) The SET TERMINAL command is not necessary on an IBM 3270-style terminal.

29.1  The Exporter Input Screens

Across the top of each input screen is a banner that shows the name of the screen and the current date and time. To move around each screen use the cursor keys or tab key, filling in appropriate blanks. Across the bottom of each screen is the command line. Your command choices are listed here. You can either use the function keys listed next to each choice (e.g., F1 or F2), or type the command on this banner line to the right of the prompt marked "Your choice:".

The following commands are available from the Exporter input screens:

Note that pressing a function key automatically issues the command associated with it so that you never have to type the command -- pressing F1 accomplishes exactly the same thing as typing the word "HELP" on the command line. If the function keys do not work the way you expect them to, try pressing the escape key and a numeral instead (e.g., <escape> 3 instead of F3).

29.1.1  The Target System Selection Screen

The first screen you will see is the Target System Selection screen. This is where you indicate which system you intend to export the SPIRES data to. This screen has a display like this:

+-----------------------------------------------------------------+
|   TARGET SYSTEM                                                 |
|   ------------------------                                      |
| - Lotus 1-2-3                                                   |
| - Dataease                                                      |
| - 10-Base                                                       |
| - Knowledge Manager                                             |
| - MultiMate                                                     |
| - SAS                                                           |
| - SPSSX                                                         |
| - SPIRES                                                        |
| - Other Programs                                                |
+-----------------------------------------------------------------+

You must select one and only one target system. Move the cursor to the area to the left of the desired system, type an "X", and press the F2 key.

29.1.2  The Data Structure Selection Screen

After you select your target system, for all target systems except Dataease, 10-Base, and Knowledge Manager, you will move on to the Data Structure Selection screen. The system you selected appears in reverse video, and a data structure will be pre-selected for you. This screen has a display like this:

+-----------------------------------------------------------------+
|  TARGET SYSTEM             DATA STRUCTURE                       |
|  -----------------------   --------------                       |
|  Lotus 1-2-3             - Hierarchical                         |
|  Dataease                - Flat File                            |
|  10-Base                                                        |
|  Knowledge Manager                                              |
|  MultiMate                                                      |
|  SAS                                                            |
|  SPSSX                                                          |
|  SPIRES                                                         |
|  Other Programs                                                 |
+-----------------------------------------------------------------+

Put simply, the difference between hierarchical data and data in a flat file is that hierarchical data can occur in structures, while data in a flat file cannot. (A flat file is also said to be in "first normal form.") In other words, in a flat file a separate record must be created for each possible combination of related elements. For example,

         Hierarchical                      Flat File

       Distributor Item                  Distributor Item
       ----------- ----                  ----------- ----
       Distco      Soap                  Distco      Soap
                   Rubber                Distco      Rubber
                   Fish                  Distco      Fish

Note that data for Dataease, 10-Base, and Knowledge Manager, the three microcomputer data base management systems supported, must always be structured in a flat file. If you chose one of these systems as your target, the Exporter will pass over the Data Structure screen and take you directly to the Element Specification screen. [See 29.1.3.] For all other systems, you may change the default data structure if you wish.

After choosing the data structure, continue to the next screen by pressing F2.

29.1.3  The Element Specification Screen

This screen is where you specify which elements you want extracted from your SPIRES subfile. You can request up to ten elements, and they should be listed in the order in which they will be read by the target system. The screen includes a display like this:

+------------------------------------------------------------------+
| ELEMENT 1   ELEMENT 2   ELEMENT 3     ELEMENT 4     ELEMENT 5    |
|                                                                  |
| ___________ ___________ _____________ _____________ ____________ |
|                                                                  |
| ELEMENT 6   ELEMENT 7   ELEMENT 8     ELEMENT 9     ELEMENT 10   |
| ___________ ___________ _____________ _____________ ____________ |
|                                                                  |
+------------------------------------------------------------------+

Type the name of each element, using the TAB key or the arrow keys to move across the screen. For instance, if you wanted to see data for the elements ITEM, QUANTITY, and PRICE, type the element names as shown here:

+------------------------------------------------------------------+
| ELEMENT 1   ELEMENT 2   ELEMENT 3     ELEMENT 4     ELEMENT 5    |
|                                                                  |
| item_______ quantity___ price________ _____________ ____________ |
|                                                                  |
| ELEMENT 6   ELEMENT 7   ELEMENT 8     ELEMENT 9     ELEMENT 10   |
| ___________ ___________ _____________ _____________ ____________ |
|                                                                  |
+------------------------------------------------------------------+

If you do not know the names of all the elements, issue the SHOWELEM command (or press the F5 key) to see a list of the elements in your subfile. You may move back and forth between the Element Specification screen and the element list display until you get all the elements typed on the screen.

When you have finished filling in the elements, issue the CONTINUE command or press the F2 key to move on to the next screen.

29.1.4  The Field Layout Screen

This screen allows you to change the default column width and element type for each element. For MultiMate, SPIRES, and Other Programs, no element type choice is given.

The default width is taken from element information in the file definition or is calculated by the Exporter; the combined width of all fields can be up to 235 characters. (Note that the 235 characters include the blank spaces between the columns, so the actual limit depends on how many columns you define.) For all systems except MultiMate, values that exceed their column width will be truncated. With MultiMate, the values are wrapped within the column dimensions.

Elements can be one of two types: TEXT or NUMERIC. If the file's definition gives an element information VALUE-TYPE of NUMERIC, the element type will be NUMERIC in the Exporter; otherwise, it defaults to TEXT. Specify NUMERIC for an element if its values are numeric and you will want to perform arithmetic calculations on them. (Note: if the file's definition has an edit mask defined in an element information packet for an element declared to be numeric, dollar signs and commas will be stripped from the values.)
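
For an element declared numeric whose definition carries an edit mask, the stripping described above transforms values like this (the values shown are illustrative):

     stored value:      $1,234.56
     exported value:     1234.56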

The field layout display looks like this:

+------------------------------------------------------+
|  ELEMENT                        WIDTH       TYPE     |
|  ----------------------------   -----       ----     |
|  ITEM                           30          TEXT     |
|  QUANTITY                       20          TEXT     |
|  PRICE                          20          NUMERIC  |
+------------------------------------------------------+

Elements specified as text will be left-justified within the column; any missing values will be blank. Elements that are numeric are right-justified in the column. For Lotus 1-2-3, a missing value is listed as ""; for 10-Base and Knowledge Manager, missing values appear as 0; for all other systems, missing values are left blank.

The TYPE column does not appear if you have selected MultiMate or Other Programs. This screen does not appear at all if you have selected SPIRES.

29.2  The Exporter Command Environment

After you leave the input screens, either by completing the last screen or by issuing the DONE command, the system places you in a command environment rather than a full-screen environment. The command prompt begins with a colon (":->") to remind you that you are still in the Exporter and have access to commands that are unavailable in other parts of SPIRES. All SPIRES and WYLBUR commands are once again available, but special options associated with Exporter input, such as BACKUP, can no longer be used. Thus, for help you would now type a "?" or the word "HELP" instead of pressing the F1 key.

Special Exporter Commands Available

At any time while you are in the Exporter command environment, you can issue any of the following commands:

     HELP                 describes the commands that are available
                          to you at the time;

     DEFINE               returns you to the first screen in input mode
                          where you can modify the options you have
                          specified for the data so far;

     DEFINE *             returns you to the input screen that you
                          most recently modified;

     DEFINE CLEAR         returns you to the first input screen to begin
                          a brand new formatting definition; the
                          previous definition is erased;

     RETURN               returns you to SPIRES.

After you have defined the format for your data, you must specify the group of records you want retrieved from the subfile. You do this by specifying a Global FOR class. Global FOR allows you to define a set of records based on criteria other than record keys or indexed values, such as all records in the subfile, all records in the current search result, or all records added today. (For more information on Global FOR, see the manual "Sequential Record Processing in SPIRES.")

The two Global FOR classes most commonly used with the Exporter are the one that selects all records in the subfile and the one that selects the current search result.

The EXPORT command places the data from the records in the specified class into the active file.

Syntax of EXPORT Command

The complete syntax of the EXPORT command is:

     EXPORT <options> <CLEAR>

The options allow you to process only a portion of the class of records, as with other commands under Global FOR. For example, the command EXPORT 5 will place five records from the class into the active file. The default is ALL, so if you issue the EXPORT command without any options, all the records in the class will be retrieved. It is often a good idea to export only a few records the first time, using the "EXPORT n" form, to ensure that your formatting instructions produce the desired result.

The CLEAR option indicates that it is all right to clear your active file before placing data in it. If you do not include it and there is data in your active file, you will be asked if it is OK to clear your active file.

Once you issue the EXPORT command, your Global FOR class is cleared. Your formatting definition remains in effect, however, so you can establish another class of records and export them according to the same formatting definition.
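
A typical sequence, assuming the FOR SUBFILE class (which selects all records in the subfile; see "Sequential Record Processing in SPIRES" for other classes), might look like this:

     :-> for subfile
     :-> export 5
     (examine the active file to verify the formatting)
     :-> for subfile
     :-> export clear

The first EXPORT places five records in the active file as a formatting check; the second retrieves the entire class, clearing the active file first.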

29.3  Importing Your Data

Once the Exporter has placed the data in your active file, you must import the data to your selected system by following the procedures for that particular system. The Exporter provides brief instructions on how to import the data and where to look in that system's documentation for more information.

If you are moving the data to a system on a microcomputer, you will have to use a file transfer program such as Samson, the ITS file transfer program for the IBM PC, to download the data from the Data Center mainframe to the microcomputer. (Type HELP SAMSON for more information about the Samson program.)

The following sections describe the procedures for each system.

Lotus 1-2-3

If you need help importing the data into Lotus 1-2-3, see the section in the Lotus 1-2-3 documentation called "Importing disk files, /File Import (/FI)." The following information will help you with the importing process:

Dataease

If you need help importing the data into Dataease, see the section in the Dataease documentation called "Data Import Facility." The following information will help you with the importing process:

10-Base

If you need help importing the data into 10-Base, see the section in the 10-Base documentation called "Reformatting and Loading Data From External Files -- The Bridge Option." (Note: When you are ready to import this data to 10-Base, when asked for "Source TYPE", you should specify "DELIMITED ASCII.")

Knowledge Manager

If you need help importing the data into Knowledge Manager, see the section in the Knowledge Manager documentation called "How to Attach the Records of a File to a Table."

MultiMate

If you need help importing the data into MultiMate, see the section in the MultiMate documentation called "FILE CONVERSION UTILITY". (Note: when you are actually ready to import the data you will select the option "ASCII To MultiMate" from the File Conversion Menu.)

SAS

The Exporter will add an INPUT statement to the top of the data, which you may copy into your SAS program; the statement itself must be deleted from the data before the data can be used.

SPSSX

The Exporter will add a DATA LIST statement to the top of the data, which you may copy into your SPSSX program; the statement itself must be deleted from the data before the data can be used.

SPIRES

Your data is placed in the active file in native SPIRES format. You may then build/update your subfile via the BATCH command in SPIBILD.

Other Programs

Your data is placed in the active file in a tabular format without delimiters for you to use as you see fit.

29.4  Sample Data

Below are examples of data formatted by the Exporter for four different systems. In each case, default values were accepted where appropriate; element widths and types were changed to prevent truncation of long text values and to declare the quantity and price values as numeric.

The first sample shows the data in standard SPIRES format. The second is for Lotus 1-2-3 in hierarchical format. The third is for Dataease in first normal form. The fourth is for SAS in first normal form.

SPIRES

ADDUPD;
 NAME = Fly Pie Shoe Shop;
 PART = Electric Shoe Horns;
   QUANTITY = 23;
   PRICE =  1.25;
 PART = Sock Parts;
   QUANTITY = 9;
   PRICE =  0.95;
 PART = Toe Girdles;
   QUANTITY = 44;
   PRICE =  1.50;
 PART = Plaid Shoe Polish;
   QUANTITY = 3;
   PRICE =  2.98;
;
ADDUPD;
 NAME = Good Man Hardware;
 PART = Inflatable Doorknobs;
   QUANTITY = 12;
   PRICE =  1.79;
 PART = Hammer Cleaner;
   QUANTITY = 12;
   PRICE =  1.00;
 PART = Staple Gun Sights;
   QUANTITY = 15;
   PRICE =  1.15;
;
ADDUPD;
 NAME = 3-D House of Beauty;
 PART = Pancake Hair Spray;
   QUANTITY = 32;
   PRICE =  3.89;
 PART = Eyelid Wax;
   QUANTITY = 15;
   PRICE =  2.50;
 PART = Dimple Putty;
   QUANTITY = 4;
   PRICE =  3.50;
;

Lotus 1-2-3

"Fly Pie Shoe Shop        " "Electric Shoe Horns      "           23       1.25
"                         " "Sock Parts               "            9       0.95
"                         " "Toe Girdles              "           44       1.50
"                         " "Plaid Shoe Polish        "            3       2.98
"Good Man Hardware        " "Inflatable Doorknobs     "           12       1.79
"                         " "Hammer Cleaner           "           12       1.00
"                         " "Staple Gun Sights        "           15       1.15
"3-D House of Beauty      " "Pancake Hair Spray       "           32       3.89
"                         " "Eyelid Wax               "           15       2.50
"                         " "Dimple Putty             "            4       3.50

Dataease

Fly Pie Shoe Shop        ,Electric Shoe Horns      ,        23,      1.25
Fly Pie Shoe Shop        ,Sock Parts               ,         9,      0.95
Fly Pie Shoe Shop        ,Toe Girdles              ,        44,      1.50
Fly Pie Shoe Shop        ,Plaid Shoe Polish        ,         3,      2.98
Good Man Hardware        ,Inflatable Doorknobs     ,        12,      1.79
Good Man Hardware        ,Hammer Cleaner           ,        12,      1.00
Good Man Hardware        ,Staple Gun Sights        ,        15,      1.15
3-D House of Beauty      ,Pancake Hair Spray       ,        32,      3.89
3-D House of Beauty      ,Eyelid Wax               ,        15,      2.50
3-D House of Beauty      ,Dimple Putty             ,         4,      3.50

SAS

INPUT name  1-25 part  27-51 quantity  53-62 price 64-73;
Fly Pie Shoe Shop         Electric Shoe Horns       23          1.25
Fly Pie Shoe Shop         Sock Parts                9           0.95
Fly Pie Shoe Shop         Toe Girdles               44          1.50
Fly Pie Shoe Shop         Plaid Shoe Polish         3           2.98
Good Man Hardware         Inflatable Doorknobs      12          1.79
Good Man Hardware         Hammer Cleaner            12          1.00
Good Man Hardware         Staple Gun Sights         15          1.15
3-D House of Beauty       Pancake Hair Spray        32          3.89
3-D House of Beauty       Eyelid Wax                15          2.50
3-D House of Beauty       Dimple Putty              4           3.50

29.5  EMS

EMS is the Electronic Mail System used on the mainframe at Stanford. EMS organizes your mail as records in a SPIRES database called MSGFILE.

EMS has been ported to desktop and laptop systems, to which you can migrate your MSGFILE before the mainframe is decommissioned. A special format is available that converts your MSGFILE records into a form readable by Eudora:

  EMS> attach .msgfile
  EMS> set format **ORV.GQ.EMS.PEMS.EUDORA
  EMS> find ... ; criteria to find records.
  EMS> in active clean type

The Active File created by the "in active clean type" command will contain the set of records found by your search request. You could also use Global FOR and DISplay commands to obtain a set of records. Whatever method you use, the Active File can be saved as a file that Eudora understands as a "mailbox".

  EMS> save mymail.mbx

You should move the saved Active File to your Mail Folder inside your Eudora Folder.

29.6  Full-Screen Key-Sequences

  Key(s)          Sequence
  Bsp/delete      CTRL-H    <-  Backspace
  DEL             CTRL-K    <-  Del (07F)
  DnArrow         CTRL-D    <-  KEY_DOWN
  End             ESC E
  ESC             CTRL-[        Esc (01B)
  F1              ESC 1     <-  KEY_A1
  F2              ESC 2     <-  KEY_B2
  F3              ESC 3     <-  KEY_A3
  F4              ESC 4     <-  KEY_F(5)
  F5              ESC 5     <-  KEY_F(6)
  F6              ESC 6     <-  KEY_F(7)
  F7              ESC 7     <-  KEY_F(9)
  F8              ESC 8     <-  KEY_F(10)
  F9              ESC 9     <-  KEY_F(0)
  F10             ESC 0     <-  KEY_C1
  Home            CTRL-A
  Ins             ESC I     <-  Toggle INSERT
  Kpd .           DEL C     <-  KEY_C3
  Kpd +           DEL W     <-  KEY_F(8)
  Kpd *           CTRL-E
  Kpd -           ESC I     <-  Toggle INSERT
  Kpd /           CTRL-F
  Kpd =           CTRL-N
  Kpd clear       CTRL-P
  Kpd enter       CTRL-C
  Kpd 1           ESC 1     <-  KEY_A1
  Kpd 2           ESC 2     <-  KEY_B2
  Kpd 3           ESC 3     <-  KEY_A3
  Kpd 4           ESC 4     <-  KEY_F(5)
  Kpd 5           ESC 5     <-  KEY_F(6)
  Kpd 6           ESC 6     <-  KEY_F(7)
  Kpd 7           ESC 7     <-  KEY_F(9)
  Kpd 8           ESC 8     <-  KEY_F(10)
  Kpd 9           ESC 9     <-  KEY_F(0)
  Kpd 0           ESC 0     <-  KEY_C1
  LfArrow         CTRL-L    <-  KEY_LEFT
  Linefeed        CTRL-J    like CTRL-D
  PgDn            ESC N
  PgUp            ESC P
  Return          CTRL-M
  RtArrow         CTRL-R    <-  KEY_RIGHT
  TAB             CTRL-I
  UpArrow         CTRL-U    <-  KEY_UP
        Full-screen Reference Sheet
  Sequence            Function
  CTRL-A    (HOME)    Move cursor to/from command line
  CTRL-B              Move to bottom line of screen
  CTRL-C              Attention/Interrupt
  CTRL-D              Move down one line
  CTRL-E              Move to end of field (right,down)
  CTRL-F              Move to start of field (left,up)
  CTRL-G    (BELL)    Ring bell
  CTRL-H              Backspace
  CTRL-I    (TAB)     Move to next tab setting
  CTRL-J              Linefeed, same as CTRL-D
  CTRL-K    (DEL)     Alternate for DEL
  CTRL-L              Move left one character
  CTRL-M    (RETURN)  Return
  CTRL-N              Move to next word
  CTRL-O              (Not used)
  CTRL-P              Move to previous word
  CTRL-Q              Ignore
  CTRL-R              Move right one character
  CTRL-S              Ignore
  CTRL-T              Move to top line of screen
  CTRL-U              Move up one line
  CTRL-V    (INSERT)  Toggle Insert mode
  CTRL-W              Move to previous tab setting
  CTRL-X              Ignore
  CTRL-[              This is ESC, see ESC below
  DEL C               Delete character
  DEL E               Delete current-to-end of line
  DEL F               Delete first-to-current of line
  DEL N               Delete current-to-end in word
  DEL P               Delete first-to-current in word
  DEL W               Delete current word (DEL P, DEL N)
  ESC [               See ESC-[ table
  ESC TAB             Move to previous tab setting
  ESC D               Move to last field
  ESC H (or h)        Home
  ESC I (or i)        Toggle Insert mode
  ESC O (upper only)  See ESC-O table
  ESC U               Move to first field
  ESC V (or v)        Refresh screen
  ESC DEL             Ignore
         ESC-O Table:
  ESC O A             Move up one line
  ESC O B             Move down one line
  ESC O C             Move right one character
  ESC O D             Move left one character
  ESC O M             Attention/Interrupt
  ESC O P             Move to previous word
  ESC O Q             Move to next word
  ESC O R             Move to start of field (left,up)
  ESC O S             Move to end of field (right,down)
  ESC O l (lower L)   Same as DEL
  ESC O m             Toggle Insert mode
  ESC O n             Delete character
  ESC O {p->y}        Same as ESC {0->9} respectively
         ESC-[ Table:
  ESC [ {CTRL-key}    Same as CTRL-key
  ESC [ DEL           Same as DEL
  ESC [ 2 ~           Toggle Insert mode
  ESC [ 3 ~           Same as DEL
  ESC [ 5 ~           Move to previous word
  ESC [ 6 ~           Move to next word
  ESC [ {11-15} ~     F1-F5
  ESC [ {17-21} ~     F6-F10
  ESC [ A             Move up one line
  ESC [ B             Move down one line
  ESC [ C             Move right one character
  ESC [ D             Move left one character

29.7  The SPIMSG Command

The SPIMSG command is a master-mode command used by the system administrator to set the "message of the day", which is kept at the very beginning of the COMZ file. The command has three basic forms:

  1.  SPIMSG "Any string with interior quotes doubled"
  2.  SPIMSG Any string without quotes, apostrophes or semi-colons.
  3.  SPIMSG

The last form, with no string, is used to clear any existing message. You must be in master mode to issue this command. You can check the result with the DUMP COMZ command: if there is no message, the first byte will contain hexadecimal 00; otherwise, the first byte contains the length of the message, followed by the message text.
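
For example, to set a message containing interior quotation marks, double each quote character (the message text here is illustrative), then verify the result and finally clear it:

     SPIMSG "Scheduled ""routine"" maintenance Friday"
     DUMP COMZ
     SPIMSG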

29.8  Command Retry

Most versions of SPIRES allow command-line editing and provide a command-line history. As you input a command line (in line-by-line mode, not full-screen mode), you may edit that line before completing it. Your input text is captured in a buffer. Typing printable characters generally inserts new text into the buffer (unless you are in overwrite mode; see below). Other special keys can be used to modify the text in the buffer. In the descriptions below, ^n means Control-n, that is, holding the CONTROL key down while pressing "n". Errors ring the terminal bell.

 ^A/^E   : Move cursor to beginning/end of the line.
 ^F/^B   : Move cursor forward/backward one character.
 ESC-F   : Move cursor forward one word.
 ESC-B   : Move cursor backward one word.
 ^D      : Delete the character under the cursor.
 ^H, DEL : Delete the character to the left of the cursor.
 ^K      : Kill from the cursor to the end of line.
 ^L      : Redraw current line.
 ^O      : Toggle overwrite/insert mode. Initially in insert mode.
    Text added in overwrite mode (including yanks) overwrites
    existing text, while insert mode does not.
 ^P/^N   : Move to previous/next item on history list.
 ^R/^S   : Perform incremental reverse/forward search for string on
    the history list.  Typing normal characters adds to the current
    search string and searches for a match. Typing ^R/^S marks
    the start of a new search, and moves on to the next match.
    Typing ^H or DEL deletes the last character from the search
    string, and searches from the starting location of the last search.
    Therefore, repeated DEL's appear to unwind to the match nearest
     the point at which the last ^R or ^S was typed.  If DEL or ^H is
     repeated until the search string is empty, the search location
     begins from the start of the history list.  Typing ESC or
    any other editing character accepts the current match and
    loads it into the buffer, terminating the search.
 ^T      : Toggle (swap) the characters under and to the left of the cursor.
 ^U      : Delete the entire line.
 ^Y      : Yank previously killed text back at current location.
    Note that this will overwrite or insert, depending on the current mode.
 TAB     : By default adds spaces to buffer to get to next TAB stop
    (just after every 8th column).
 NL, CR  : Return the current buffer to the program.

DOS and ANSI terminal arrow key sequences are recognized, and act like:

 up     : same as ^P
 down   : same as ^N
 left   : same as ^B
 right  : same as ^F

The following PC keyboard mappings also take place:

 HOME   : same as ^A
 END    : same as ^E
 DELETE : same as ^H
 INSERT : same as ^O
 RETURN : same as NL

:  Appendices

:29  SPIRES Documentation

I. Primers

II. User Language

III. Application Development

IV. Reference Guides (Cards)

V. Prism

VI. SPIRES Aids and Study Materials

VII. Other Related Documents

(The following documents are not SPIRES documents per se, but describe utilities and programs that may be useful in developing SPIRES applications.)

Obtaining Documentation

The above documents (except any marked "in preparation") may be obtained through the PUBLISH command on the Forsythe computer at Stanford University. If you do not use SPIRES at Stanford, contact your local system administrator to find out how SPIRES documents are made available there.

Updates to SPIRES Manuals

SPIRES manuals are updated regularly as changes are made to the system. This does not mean that all manuals are out of date with each new version of SPIRES. The changes to the documentation match those made to SPIRES: they are usually minor and/or transparent. Not having the most current version of a manual may mean you do not have all the most recent information about all the latest features, but the information you do have will usually be accurate.

A public subfile, SPIRES DOC NOTES, contains information about changes to SPIRES manuals. Using this subfile, you can determine whether the manual you have has been updated and if so, how significant those updates are. You need to know the date your manual was published, which is printed at the top of each page. For details on the procedure, issue the command SHOW SUBFILE DESCRIPTION SPIRES DOC NOTES.


INDEX


$DECIMAL FUNCTION   18.1.3
$DELETE VARIABLE   9.1
$EDIT FUNCTION   18.1.3
$GETCVAL FUNCTION   20.1
                    12
$GETIVAL FUNCTION   20.1
                    12
$GETUVAL FUNCTION   20.1
                    12
$GETXVAL FUNCTION   20.1
                    12
$GETXVAL FUNCTIONS, AND PHANTOM STRUCTURES   23.2
$LOOKSUBF FUNCTION, AND NON-UNIQUE SUBFILE NAMES   2
$PACK FUNCTION   18.1.3
$PATHCUR VARIABLE   14.8
$PATHFIND FUNCTION   14.8
$PATHINFO FUNCTION   14.8
$PATHNUM VARIABLE   14.2
$PRECISION FUNCTION   18.1.3
$PROTOCOL FORMAT   28.3.3
$REMAINDER FUNCTION   18.1.3
$SORTCODE VARIABLE CODES   1.4.1
$SUBF.LOOKUP PROC, AND NON-UNIQUE SUBFILE NAMES   2
$TRANSACTIONS RECORD-TYPE   23.3
                            23.1
$TRANSFER VARIABLE   14.10
$WINDOW FUNCTION   18.1.3
$ZAP FUNCTION   20.1
- ELEMENT   5.1
ADD COMMAND, FOR A REFERENCED RECORD   12.1
ADD COMMAND, IN PARTIAL PROCESSING   12.3
ADD COMMAND, WITH DATA OPTION   25
ADDMERGE COMMAND, WITH DATA OPTION   25
ADDUPDATE COMMAND, WITH DATA OPTION   25
ALSO COMMAND, WITH MULTIPLY DEFINED ELEMENTS   5
ARITHMETIC, DEFAULT   18.1.3
ATTACH COMMAND, THROUGH PREFIX   14.3
ATTACH SET COMMAND   17.2
ATTACH TEMPORARY FILE COMMAND   24
ATTACH TFILE COMMAND   24
A65 RULE, AND NON-UNIQUE SUBFILE NAMES   2
CHANGE GENERATION   7.1
CHANGE GENERATION, OVERVIEW   9
CHANGE GENERATION, PROCEDURAL DETAILS   9.1
CHANGE GENERATION, THE CHANGES SUBFILE   9.2
CLEAR DECLARED ELEMENT(S) COMMAND   20.1
CLEAR DEFINED ELEMENT(S) COMMAND   20.1
CLEAR DYNAMIC ELEMENT(S) COMMAND   20.1
CLEAR FILE COUNTS COMMAND   13.5
CLEAR FILTER COMMAND   21.3
CLEAR FILTERS COMMAND   21.3
CLEAR LOCK COMMAND   16
CLEAR METACCT COMMAND   3.2
CLEAR OUTPUT CONTROL COMMAND   7.2
CLEAR PATH COMMAND   14.7
CLEAR PATH ENVIRONMENT COMMAND   14.7
CLEAR REFERENCE COMMAND   12.1
CLEAR SINFO COMMAND   13.4
CLEAR SUBGOALS COMMAND   14.11
CLEAR TABLE COMMAND   17.2
CLEAR TABLES COMMAND   27.3
CLEAR VGROUPS COMMAND   14.5
COMMAND RETRY   29.8
COMMENTS, THROW-AWAY   5.1
COMPILING OTHER USERS' RECORDS IN SYSTEM SUBFILES   3.2
DATA MOVE DECLARES SUBFILE   7.4
                             6.1
DATA MOVE FOR SUBFILE OUTPUT   7.3.1
DATA MOVE FOR TABLE OUTPUT   7.3.2
DATA MOVE PROCESSING   7.3
DATA MOVE PROCESSING EXAMPLE   7.5
DATA OUTPUT CONTROL SUBFILE   6.1
DEBUGGING COMMANDS, EMULATOR   15.3
DEBUGGING, PERFORM SYSDUMP COMMAND   28.10
DECLARE SUBFILE SUBNAME   24.2
DECLARE DATA SUBFILES   6.1
DECLARE DATA, STORING   6
DECLARE ELEMENT COMMAND   20.3
DECLARE ELEMENT SUBFILE DESCRIPTION   7.4
DECLARE EXTERNAL DATA   26
DECLARE FILE   24.1
DECLARE INPUT CONTROL COMMAND   11.1
DECLARE INPUT TABLE COMMAND   17.3
DECLARE OUTPUT CONTROL COMMAND   7.1
DECLARE TABLE COMMAND   17.1
DECLARED DYNAMIC ELEMENTS   20.3
DECLARED ELEMENTS   20.3
DEFAULT ARITHMETIC   18.1.3
DEFINE DISPLAY SET COMMAND, WITH TABLE OPTION   17.2
DEFINE ELEMENT COMMAND   20.1
DEFINE ELEMENT COMMAND, TO DEFINE PHANTOM STRUCTURES   23.3
DEFINE SET COMMAND   1.3.1
DEFINE SET COMMAND, DIRECT OPTION   1.8.1
DEFINE SET COMMAND, DISPLAY OPTION   1.9
DEFINE TABLE COMMAND   27.1
DEFINE TABLE COMMAND SET   27
DEFINED DYNAMIC ELEMENTS   20.1
DESTINATION ATTACHED AND POSTSCRIPT, PERFORM SYSTEM PRINT   28.6.3
DIRECT SETS   1.8
DIRECT SETS, AND FILTERS   1.8.3
DISPLAY COMMAND, IN PARTIAL PROCESSING   12.3
DISPLAY SETS   1.9
               7.1
DISPLAY-DECIMALS STATEMENT   18.3
DOCUMENTATION, BIBLIOGRAPHY   :29
DOCUMENTATION, PUBLISHING   28.2
DUMP BLOCK COMMAND   15.1
DUMP CHAR COMMANDS   15.5
DUMP RECORD COMMAND   15.4
DYNAMIC ELEMENTS   20
DYNAMIC ELEMENTS, AND GLOBAL FOR   20.6
DYNAMIC ELEMENTS, AND WHERE CLAUSES   20.6
DYNAMIC ELEMENTS, DECLARED   20.3
DYNAMIC ELEMENTS, DEFINING   20.1
DYNAMIC ELEMENTS, ERRORS   20.7
DYNAMIC ELEMENTS, USING   20.4
DYNAMIC ELEMENTS, WITH FILTERS   21.5
DYNAMIC ELEMENTS, WITH FORMATS   20.5
DYNAMIC ELEMENTS, WITH SECONDARY ELEMENTS   20.2
DYNAMIC PHANTOM STRUCTURES   23.3
EDIT MASKS   19
EDIT MASKS, CHECK PROTECTION SYMBOL   19.1
EDIT MASKS, CURRENCY SYMBOLS   19.3
EDIT MASKS, DECIMAL POINT INDICATION   19.2
EDIT MASKS, DIGIT SPECIFIERS   19.1
EDIT MASKS, FLOATING CHARACTERS   19.4
EDIT MASKS, FORMAL SYNTAX RULES   19.6
EDIT MASKS, INSERTION CHARACTERS   19.5
EDIT MASKS, LEADING ZEROS   19.1
EDIT MASKS, NUMERAL REPRESENTATION   19.1
EDIT MASKS, SIGN SYMBOLS   19.3
EDIT MASKS, ZERO SUPPRESSION   19.1
ELEMENT FILTERS, AND DYNAMIC ELEMENTS   21.5
ELEMENT FILTERS, BASIC USES   21.1
ELEMENT FILTERS, BY OCCURRENCE COUNT   21.4
ELEMENT FILTERS, DISPLAYING AND CLEARING   21.3
ELEMENT FILTERS, INTRODUCTION   21
ELEMENT FILTERS, LIMITATIONS   21.2
ELEMENT FILTERS, SETTING ADDITIONAL   21.1.1
ELEMENTS, DYNAMIC AND WHERE CLAUSES   20.6
ELEMENTS, DYNAMIC WITH FORMATS   20.5
ELEMENTS, MULTIPLY DEFINED   5
EMS   29.5
ENDDECLARE COMMAND   11.1
                     7.1
ENDFOR COMMAND, IN PARTIAL PROCESSING   12.2
ENTER EXPORTER COMMAND   29
ERASE COMMAND, TO DISCARD SETS   1.2
ERROR CODES, IN SPISORT   1.4.1
ESTABLISH COMMAND, AND NON-UNIQUE SUBFILE NAMES   2
EUDORA FORMAT, EMS   29.5
EXPONENTS OF PACKED DECIMALS   18.1.1
EXPORT COMMAND   29.2
EXPORTER   29
EXPORTER, COMMANDS   29.2
EXPORTER, DATA STRUCTURE SELECTION SCREEN   29.1.2
EXPORTER, ELEMENT SPECIFICATION SCREEN   29.1.3
EXPORTER, FIELD LAYOUT SCREEN   29.1.4
EXPORTER, IMPORTING DATA TO TARGET SYSTEM   29.3
EXPORTER, INPUT SCREENS   29.1
EXPORTER, SAMPLE DATA   29.4
EXPORTER, TARGET SYSTEM SELECTION SCREEN   29.1.1
EXPORTING DATA   29
EXTERNAL DATA   8.2
EXTERNAL FILE RECORD PROCESSING   8.1
EXTERNAL FILES, EXAMPLES   8.4
EXTERNAL FILES, EXTERNAL DATA ELEMENT DESCRIPTIONS   8.3
EXTERNAL FILES, INTRODUCTION   8
EXTERNAL SUBFILE   8.2
EXTERNAL SUBFILES, INTRODUCTION   8
FILE DEFINITIONS, SUMMARY OF   28.4
FILTERS, AND DIRECT SETS   1.8.3
FILTERS, AND DYNAMIC ELEMENTS   21.5
FILTERS, AND SPISORT   1.7
FILTERS, AND THE REFERENCE COMMAND   12
FILTERS, BASIC USES   21.1
FILTERS, BY OCCURRENCE COUNT   21.4
FILTERS, DISPLAYING AND CLEARING   21.3
FILTERS, INTRODUCTION   21
FILTERS, LIMITATIONS   21.2
FIX BLOCK COMMAND   15.2
FONT TABLE, PERFORM SYSTEM PRINT   28.6.1
FOR * COMMAND, IN PARTIAL PROCESSING   12.3
FOR * COMMAND, WITH REFERENCED RECORD   12.6
FOR ** COMMAND, IN PARTIAL PROCESSING   12.3
FOR ELEMENT COMMAND, IN PARTIAL PROCESSING   12.2
FOR INDEX COMMAND, COMPARED WITH OTHER SORTING METHODS   1.1
FOR SET COMMAND, DIRECT OPTION   1.8.2
FOR SET COMMAND, NOTES   1.5
FOR SET COMMAND, UNFILTERED OPTION   1.7
FOR SUBFILE COMMAND, COMPARED WITH OTHER SORTING METHODS   1.1
FORMAT, ZAP OF COMPILED DATA   4.1
FORMATS, AND DYNAMIC ELEMENTS   20.5
FORMATTING-CHARACTER TABLE, PERFORM SYSTEM PRINT   28.6.2
FULL-SCREEN KEY-SEQUENCES   29.6
GENERATE MESSAGES COMMAND   15.5
GENERATE REFERENCE COMMAND   12.1
GENERATE SET COMMAND   1.3.2
GENERATE SET COMMAND, AND DIRECT SETS   1.8.1
GENERATE SET COMMAND, WITH DISPLAY SETS   1.9
GENERATE TABLES COMMAND   27.2
HYPHEN ELEMENT   5.1
I/O MONITORING COMMANDS   13
INCLOSE COMMAND   12.8
INFINITY IN PACKED DECIMALS   18.1.2
INPUT CONTROL   11
INPUT CONTROL COMMANDS   11.2
INPUT CONTROL COMMANDS -- SAMPLE 1: DECLARE TABLES   11.2.1
INPUT CONTROL COMMANDS -- SAMPLE 2: DECLARE INPUT TABLES   11.2.2
INPUT CONTROL COMMANDS -- SAMPLE 3: INPUT RECDEF DEFINITIONS   11.2.3
INPUT CONTROL COMMANDS -- SAMPLE 4: INPUT CONTROL PROTOCOL   11.2.4
INPUT CONTROL COMMANDS -- SAMPLE 5: PROCESSING   11.2.5
INPUT CONTROL DECLARATION   11.1
INPUT CONTROL PACKETS   11.1
INPUT TABLES, DECLARING   17.3
INPUT TABLES, STATEMENTS   17.3
ISQL   27.1
KEY-SEQUENCES, FULL-SCREEN   29.6
LOCKS   16
MAGNITUDE OF PACKED DECIMALS   18.1.1
MERGE COMMAND, IN PARTIAL PROCESSING   12.3
MERGE COMMAND, WITH DATA OPTION   25
METACCT SUBFILE   3.1
MONITORING I/O   13
MULTIPLY DEFINED ELEMENTS   5
OBJECT DECK MAINTENANCE COMMANDS   15.5
OUTPUT CONTROL   7
OUTPUT CONTROL DECLARATION   7.1
OUTPUT CONTROL PACKETS   7.1
OUTPUT CONTROL, COMMANDS   7.2
OVERLAY OPTION, SET FILTER COMMAND   21.1.1
PACKED DECIMALS   18
PACKED DECIMALS, ARITHMETIC   18.4
PACKED DECIMALS, AS ELEMENTS   18.2
PACKED DECIMALS, AS VARIABLES   18.3
PACKED DECIMALS, EXPONENT   18.1.1
PACKED DECIMALS, FUNCTIONS FOR   18.1.3
PACKED DECIMALS, INDEXING   18.2
PACKED DECIMALS, INFINITY   18.1.2
PACKED DECIMALS, INPUT FORMS   18.1.2
PACKED DECIMALS, MAGNITUDE   18.1.1
PACKED DECIMALS, NOTATION   18
PACKED DECIMALS, OUTPUT FORMS   18.1.2
PACKED DECIMALS, PRECISION   18.1.1
PACKED DECIMALS, PRIMER   18
PACKED DECIMALS, SORTING   18.2
PACKED DECIMALS, SPIRES VS. STANDARD   18.5
PARTIAL FOR   12.1
PARTIAL PROCESSING   12.1
PARTIAL PROCESSING, PROCESSING COMMANDS   12.3
PARTIAL PROCESSING, RECORD NAVIGATION COMMANDS   12.2
PARTIAL RECORD PROCESSING   12.1
PATH PROCESSING, ALTERNATE FORMAT   14.4
PATH PROCESSING, ALTERNATE SUBFILE   14.10, 14.3
PATH PROCESSING, ALTERNATE VGROUP   14.5
PATH PROCESSING, CLEARING PATHS   14.7
PATH PROCESSING, DEFAULT PATH   14.6
PATH PROCESSING, PRIMARY PATH   14.1
PATH PROCESSING, SIMULTANEOUS TRANSFERS AND REFERENCES   14.10
PATH PROCESSING, THROUGH PREFIX   14.2
PERFORM BUILD COMMAND   28.3
PERFORM BUILD PROTOCOLS COMMAND   28.3.1
PERFORM BUILD PROTOCOLS COMMAND, FILE DEFINITION   28.3.2
PERFORM CHNGREF   28.9
PERFORM COMMANDS   28
PERFORM FILEDEF SUMMARY COMMAND   28.4
PERFORM FORMAT LIST COMMAND   28.5
PERFORM PRINT COMMAND   28.1
PERFORM PRINT COMMAND, ERROR MESSAGES   28.1.2
PERFORM PRINT COMMAND, WITH PROTOCOL FILES   28.1.3
PERFORM PUBLISH COMMAND   28.2
PERFORM SYSDUMP COMMAND   28.10
PERFORM SYSTEM MAIL COMMAND   28.8
PERFORM SYSTEM PRINT COMMAND   28.6
PERFORM SYSTEM SEND COMMAND   28.7
PERFORM TABLE CREATE EXAMPLE   7.5
PHANTOM STRUCTURES   23
PHANTOM STRUCTURES, CAPABILITIES   23.2
PHANTOM STRUCTURES, CODING   23.1
PHANTOM STRUCTURES, DYNAMIC   23.3
PHANTOM STRUCTURES, RESTRICTIONS   23.2
PHANTOM STRUCTURES, SUBFILE   23.1
PHANTOM STRUCTURES, SUBGOAL   23.1
PRECISION OF PACKED DECIMALS   18.1.1
PRINT CHARS OPTION   28.6.1
PRINT COMMAND, UNIX EMULATOR   28.6
PRINT TRC OPTION   28.6.1
PROTOCOLS FILE DEFINITION   28.3.2
PROTOCOLS, DISPLAY OF   28.3.3
PROTOCOLS, INPUT OF   28.3.3
RECDEF, ZAP OF COMPILED DATA   4.2
RECORD DEFINITIONS, FOR TEMPORARY FILES   24
RECORD LOCKING   16
REFERENCE COMMAND   12
REFERENCE COMMAND, IN PARTIAL PROCESSING   12.3
REFERENCE COMMAND, TO BEGIN PARTIAL PROCESSING   12.1
REMOVE COMMAND, IN PARTIAL PROCESSING   12.3
RETRY COMMAND   29.8
RETURN CODES, IN SPISORT   1.4.1
S PARAMETER, SPIBILD COMMAND   13.1
S PARAMETER, SPIRES COMMAND   13.1
SELECT COMMAND, AND NON-UNIQUE SUBFILE NAMES   2
SELECT COMMAND, THROUGH PREFIX   14.3
SELECT COMMAND, WITH TEMPORARY FILES   24
SEQUENCE COMMAND, COMPARED WITH OTHER SORTING METHODS   1.1
SEQUENCE COMMAND, WITH MULTIPLY DEFINED ELEMENTS   5
SET ATTACH LOCK COMMAND   16.1
SET DECLARE PATH COMMAND   6.2
SET DEFAULT PATH COMMAND   14.6
SET ELEMENTS COMMAND, WITH MULTIPLY DEFINED ELEMENTS   5
SET FILE COUNTS COMMAND   13.5
SET FILTER COMMAND   21.1
SET FILTER COMMAND, "(OCC)" OPTION   21.4
SET FILTER COMMAND, "IN LIMIT" OPTION   21.4
SET FILTER COMMAND, AND SPISORT   1.7
SET FILTER OVERLAY COMMAND   21.1.1
SET FLAG COMMAND   22
SET IFTEST COMMAND   22
SET LOCK COMMAND   16
SET METACCT COMMAND   3.2
SET NEXT PATH COMMAND   14.6
SET NOIFTEST COMMAND   22
SET NOSINFO COMMAND   13.2
SET NOXEQDATA   10.2
SET PATH COMMAND   14.6
SET PLOCK COMMAND   16
SET PRIMARY PATH COMMAND   14.6
SET PRIVATE LOCK COMMAND   16
SET REFERENCE ELEMENTS COMMAND   12.3
SET SHARED LOCK COMMAND   16
SET SINFO COMMAND   13.1
SET SLOCK COMMAND   16
SET SUBGOAL COMMAND   14.3
SET TABLE COMMAND   17.2
SET XEQDATA COMMAND   10.2
SETS   1.2
SETS, DIRECT   1.8
SETS, DISPLAY   1.9
SHOW DECLARED ELEMENTS COMMAND   20.1
SHOW DEFINED ELEMENTS COMMAND   20.1
SHOW DYNAMIC ELEMENTS COMMAND   20.1
SHOW FILE COUNTS COMMAND   13.5
SHOW FILES ATTACHED COMMAND   13.3
SHOW FILTERS COMMAND   21.3
SHOW LEVELS, IN PARTIAL PROCESSING   12.5
SHOW METACCT COMMAND   3.2
SHOW REFERENCE ELEMENTS COMMAND   12.2
SHOW SET INFORMATION COMMAND   1.6
SHOW SINFO COMMAND   13.3
SHOW SUBFILE INFORMATION   14.8
SHOW SUBFILE INFORMATION COMMAND   13.6
SHOW TABLES COMMAND   27.4
SINFO   13
SKIP COMMAND, IN PARTIAL PROCESSING   12.3
SORTING RECORDS   1
SORTING RECORDS, OVERVIEW OF METHODS   1.1
SPIMSG COMMAND   29.7
SPISORT CATALOGED PROCEDURE   1.4.2
SPISORT COMMAND   1.4
SPISORT ERROR CODES   1.4.1
SPISORT RETURN CODES   1.4.1
SPISORT, AND FILTERS   1.7
SPISORT, COMPARED WITH OTHER SORTING METHODS   1.1
SPISORT, DIRECT SETS   1.8
SPISORT, DISCARDING SETS   1.2
SPISORT, GENERAL CAPABILITIES   1.2
SPISORT, SHOW SET INFORMATION COMMAND   1.6
SPISORT, SUMMARY OF PROCEDURE   1.3
SQL   27.1
STATIC VARIABLES, ZAP OF STORED DATA   4.5
STRUCTURES, PHANTOM   23
SUBFILE NAMES, NON-UNIQUE   2
SUBFILE STATEMENT IN PHANTOM STRUCTURES   23.1
SUBFILE TABLES, INTRODUCTION   17
SUBGOAL PROCESSING, CLEARING SUBGOALS   14.11
SUBGOAL PROCESSING, GENERAL INFORMATION   14.11
SUBGOAL STATEMENT, IN DEFINING PHANTOM STRUCTURES   23.1
SYS PROTO, ZAP OF COMPILED DATA   4.3
SYSTEM SUBFILES, GIVING USERS ACCESS TO YOUR RECORDS   3.1
SYSTEM SUBFILES, USING OTHER USERS' RECORDS   3.2
TABLES, DECLARING   17.1
TABLES, INTRODUCTION   17
TABLES, STATEMENTS   17.1
TABLES, USING   17.2
TEMPORARY FILES   24
THROUGH PREFIX IN PATH PROCESSING   14.2
THROW-AWAY ELEMENT   5.1
THRU PREFIX IN PATH PROCESSING   14.2
TRANSACTION RECORDS   23.3, 23.1
TRANSFER COMMAND, IN PARTIAL PROCESSING   12.3
TYPE COMMAND, WITH MULTIPLY DEFINED ELEMENTS   5
UPDATE COMMAND, FOR A REFERENCED RECORD   12.1
UPDATE COMMAND, IN PARTIAL PROCESSING   12.3
UPDATE COMMAND, WITH DATA OPTION   25
USING OTHER USERS' RECORDS IN SYSTEM SUBFILES   3.2
VERSION NUMBERS, FOR SYSTEM SUBFILE RECORDS   3.3
VERSION-ACCT STATEMENT   3.3
VERSION-NUMBER STATEMENT   3.3
VERSION-STR STRUCTURE   3.3
VGROUP, ZAP OF COMPILED DATA   4.4
VIA STATEMENT, FOR PHANTOM STRUCTURES   23.1
WHERE CLAUSE, IN PARTIAL PROCESSING   12.2
WITH DATA OPTION   25
WITH DECLARE PREFIX   6.2
WITH INPUT CONTROL PREFIX   11.2
WITH OUTPUT CONTROL PREFIX   7.2
XEQ DATA   10
XEQ DATA COMMAND   10.3
XEQ DATA SAMPLE PROTOCOL   10.6
XEQ DATA SETUP   10.2
XEQ DATA STRUCTURE   10.1
XEQ DATA SUBFILE   10.5
XEQ DATA XSEMPROC ACTIONS   10.4
ZAP COMMAND, FOR COMPILED OBJECT CODE   4
ZAP FORMAT COMMAND SYNTAX   4.1
ZAP PROTOCOL COMMAND   4.3
ZAP RECDEF COMMAND SYNTAX   4.2
ZAP STATIC COMMAND SYNTAX   4.5
ZAP SYS PROTO COMMAND SYNTAX   4.3
ZAP VGROUP COMMAND SYNTAX   4.4
ZONED INPUT   18.5